COMPOSITIONAL MODEL CHECKING OF PARTIALLY ORDERED STATE SPACES

by Scott Hazelhurst
B.Sc.Hons., University of the Witwatersrand, Johannesburg
M.Sc., University of the Witwatersrand, Johannesburg

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
January 1996
© Scott Hazelhurst, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Computer Science
The University of British Columbia
2366 Main Mall
Vancouver, Canada V6T 1Z4

Abstract

Symbolic trajectory evaluation (STE) — a model checking technique based on partial order representations of state spaces — has been shown to be an effective model checking technique for large circuit models. However, the temporal logic that it supports is restricted, and, as with all verification techniques, it has significant performance limitations. The demand for verifying larger circuits, and the need for greater expressiveness, requires that both these problems be examined.

The thesis develops a suitable logical framework for model checking partially ordered state spaces: the temporal logic TL and its associated satisfaction relations, based on the quaternary logic Q. TL is appropriate for expressing the truth of propositions about partially ordered state spaces, and has suitable technical properties that allow STE to support a richer temporal logic.

Using this framework, verification conditions called assertions are defined, a generalised version of STE is developed, and three STE-based algorithms are proposed for investigation. Advantages of this style of proof include: models of time are incorporated; circuits can be described at a low level; and correctness properties are expressed at a relatively high level.

A primary contribution of the thesis is the development of a compositional theory for TL assertions. This compositional theory is supported by the partial order representation of state space. To show the practical use of the compositional theory, two prototype verification systems were constructed, integrating theorem proving and STE. Data is manipulated efficiently by using binary decision diagrams as well as symbolic data representation methods. Simple heuristics and a flexible interface reduce the human cost of verification.

Experiments were undertaken using these prototypes, including verifying two circuits from the IFIP WG 10.5 Benchmark suite. These experiments showed that the generalised STE algorithms were effective, and that through the use of the compositional theory it is possible to verify very large circuits completely, including detailed timing properties.
Table of Contents

Abstract
List of Figures
List of Tables
List of Definitions, Theorems and Lemmas
Acknowledgement
Dedication
1 Introduction
  1.1 Motivation
  1.2 Verification and the Use of Formal Methods
  1.3 Partially-ordered State Spaces
    1.3.1 Mathematical Definitions
    1.3.2 Using Partial Orders
    1.3.3 Symbolic Trajectory Evaluation
  1.4 Research Contributions
  1.5 Outline of Thesis
2 Issues in Verification
  2.1 Binary Decision Diagrams
  2.2 Styles of Verification
    2.2.1 Property Checking
    2.2.2 Modal and Temporal Logics
    2.2.3 Model Comparison
  2.3 Proof Techniques
    2.3.1 Theorem Proving
    2.3.2 Automatic Equivalence and Other Testing
    2.3.3 Model Checking
  2.4 Symbolic Trajectory Evaluation
    2.4.1 Trajectory formulas
  2.5 Compositional Reasoning
  2.6 Abstraction
  2.7 Discussion
3 The Temporal Logic TL
  3.1 The Model Structure
  3.2 The Quaternary Logic Q
  3.3 An Extended Temporal Logic
    3.3.1 Scalar Version of TL
    3.3.2 Some Laws of TL
    3.3.3 Symbolic Version
  3.4 Circuit Models as State Spaces
  3.5 Alternative Definition of Semantics
4 Symbolic Trajectory Evaluation
  4.1 Verification with Symbolic Trajectory Evaluation
  4.2 Minimal Sequences and Verification
  4.3 Scalar Trajectory Evaluation
    4.3.1 Examples
    4.3.2 Defining Trajectory Sets
  4.4 Symbolic Trajectory Evaluation
    4.4.1 Preliminaries
    4.4.2 Symbolic Defining Sequence Sets
    4.4.3 Circuit Models
5 A Compositional Theory for TL
  5.1 Motivation
  5.2 Compositional Rules for the Logic
    5.2.1 Identity Rule
    5.2.2 Time-shift Rule
    5.2.3 Conjunction Rule
    5.2.4 Disjunction Rule
    5.2.5 Rules of Consequence
    5.2.6 Transitivity
    5.2.7 Specialisation
    5.2.8 Until Rule
  5.3 Compositional Rules for TLn
  5.4 Practical Considerations
    5.4.1 Determining the Ordering Relation: is A(g) ⊑V A(h)?
    5.4.2 Restriction to TLn
  5.5 Summary
6 Developing a Practical Tool
  6.1 The Voss System
  6.2 Data Representation
  6.3 Combining STE and Theorem Proving
  6.4 Extending Trajectory Evaluation Algorithms
    6.4.1 Restrictions
    6.4.2 Direct Method
    6.4.3 Using Testing Machines
    6.4.4 Using Mapping Information
7 Examples
  7.1 Simple Example
    7.1.1 Simple Example 1
    7.1.2 Hidden Weighted Bit
    7.1.3 Carry-save Adder
  7.2 B8ZS Encoder/Decoder
    7.2.1 Description of Circuit
    7.2.2 Verification
  7.3 Multipliers
    7.3.1 Preliminary Work
    7.3.2 IEEE Floating Point Multiplier
    7.3.3 IFIP WG 10.5 Benchmark Example
    7.3.4 Other Multiplier Verification
  7.4 Matrix Multiplier
    7.4.1 Specification
    7.4.2 Implementation
    7.4.3 Verification
    7.4.4 Analysis and Comments
  7.5 Single Pulser
    7.5.1 The Problem
    7.5.2 An Example Composite Compositional Rule
    7.5.3 Application to Single Pulser
  7.6 Evaluation
8 Conclusion
  8.1 Summary of Research Findings
    8.1.1 Lattice-based Models and the Quaternary logic Q
    8.1.2 The Temporal Logic TL
    8.1.3 Symbolic Trajectory Evaluation
    8.1.4 Compositional Theory
  8.2 Future Research
    8.2.1 Non-determinism
    8.2.2 Completeness and Model Synthesis
    8.2.3 Improving STE Algorithms
    8.2.4 Other Model Checking Algorithms
    8.2.5 Tool Development
Bibliography
A Proofs
  A.1 Proof of Properties of TL
    A.1.1 Proof of Lemma 3.3
    A.1.2 Proof of Theorem 3.5
    A.1.3 Proof of Lemma 3.6
    A.1.4 Proof of Lemma 3.7
  A.2 Proofs of Properties of STE
    A.2.1 Proof of Lemma 4.4
    A.2.2 Proof of Theorem 4.5
  A.3 Proofs of Compositional Rules for TLn
B Detail of testing machines
  B.1 Structural Composition
    B.1.1 Composition of Models
    B.1.2 Composition of Circuit Models
  B.2 Mathematical Preliminaries for Testing Machines
  B.3 Building Blocks
  B.4 Model Checking
C Program listing
  C.1 FL Code for Simple Example 1
  C.2 FL Code for Hidden Weighted Bit
  C.3 FL Code for Carry-Save Adder
  C.4 FL Code for Multiplier
  C.5 FL Code for Matrix Multiplier Proof
Index

List of Figures

1.1 Example Lattice State Space
1.2 The Partial Order for C
3.1 Inverter Circuit
3.2 Inverter Model Structure — Flat State Space
3.3 Lattice-based Model Structure
3.4 Non-deterministic state relation
3.5 Lattice State Space and Transition Function
3.6 The Bilattice Q
3.7 Definition of g and h
4.1 The Preorder ⊑V
5.1 Example
5.2 Two Cascaded Carry-Save Adders
6.1 An FL Data Type Representing Integers
6.2 Data Representation
6.3 A CSA Adder
7.1 Simple Example 1
7.2 Circuit for the 8-bit Hidden Weighted Bit Problem
7.3 B8ZS Encoder
7.4 Base Module for Multiplier
7.5 Schematic of Multiplier
7.6 Black Box View of 2Syst
7.7 Cell Representation
7.8 Implementation of Cell
7.9 Systolic Array
7.10 Single Pulser
B.1 BBA(g, 3): a Three Delay-Slot Combiner
B.2 BBB(g, 4)

List of Tables

3.1 Conjunction, Disjunction and Negation Operators for Q
5.2 Summary of TLn Inference Rules
7.3 CSA Verification: Experimental Results
7.4 Benchmark 17: Correspondence Between Integer and Bit Nodes
7.5 Verification Times for Benchmark 17 Multiplier
7.6 Inputs for the 2Syst Circuit
7.7 Outputs of the 2Syst Circuit
7.8 Benchmark 22: Actual Output Times

List of Important Definitions, Theorems and Lemmas

Chapter 3: Definitions 3.1, 3.2, 3.3, 3.6, 3.7, 3.8, 3.10, 3.11, 3.12, 3.13; Lemmas 3.1, 3.2, 3.3, 3.4, 3.7; Theorem 3.5.
Chapter 4: Definitions 4.1, 4.2, 4.4, 4.7, 4.9, 4.12, 4.14, 4.15, 4.18, 4.19; Theorems 4.2, 4.5; Lemmas 4.3, 4.4, 4.8.
Chapter 5: Lemmas 5.2, 5.6, 5.9, 5.10, 5.22, 5.24, 5.27, 5.28; Theorems 5.3, 5.4, 5.5, 5.7, 5.8, 5.11, 5.12, 5.14, 5.15, 5.16, 5.17, 5.18, 5.19, 5.20, 5.21, 5.31; Corollaries 5.13, 5.23.
Appendix A: Lemmas A.6, A.11, A.15, A.18, A.19; Theorems A.12, A.13, A.14, A.16, A.17, A.20, A.21.
Appendix B: Definition B.1; Lemmas B.1, B.2, B.3.

Acknowledgement

'A person is a person through other people.' The person I asked to proofread this said it was too sentimental.
Perhaps it is, but the experience and time spent on this thesis have been a very important part of my life, and it is important for me to thank some of the many people who have contributed to this experience.

The Computer Science Department and the Integrated Systems Laboratory at UBC have been a wonderful place to work. First, I must thank my supervisor, Carl Seger. Carl has supported and mentored me since I arrived at UBC, and I am very grateful for all that he has done, well beyond what I could have expected. Carl, I have learned a tremendous amount from you. I also thank the other members of my supervisory committee: Paul Gilmore, Nick Pippenger, Son Vuong, and particularly Mark Greenstreet, who has provided enthusiasm, ideas, and support and given freely of his time. Thanks to all members of the ISD group, especially Mark Aagaard, Nancy Day, Mike Donat, Catherine Leung, Helene Wong, and most especially Andy Martin.

There are many others in the department who have been important too. My office-mates, Carol Saunders, Catherine Leung, Nana Kender, and Rita Dilek, have been supportive. Helene Wong, Marcelo Walter, Peter Smith, Jim Boritz and Alistair Veitch have all made being here a richer experience. The administrative and technical staff and management of the department have provided an environment that was a pleasure to work in. Financial support from UBC, NSERC and ASI made my research possible.

For the last two years, Green College has not only been a stimulating academic environment but also a great community to be part of, and I am very fortunate to have had this experience. There are many College members who have helped me over the years.

There are two people whose friendship I shall always count as major accomplishments at UBC. I am very grateful to Patricia Yuen-Wan Lin for sharing the journey and giving me constant encouragement. Peter Urmetzer has been a very good friend throughout. Thank you for all you have done.

Many others helped get me here and have supported me since, and I would like to thank all my friends. I wish to particularly mention Saul Johnson and Annette Wozniak (and Kiah); Conrad Mueller, Ian Sanders, Philip Machanick and all the other Turing Tipplers; Sheila Rock; Stan Szpakowicz; Anthony Low; Georgia and Hugh Humphries.

Finally, my greatest thanks go to my family: my parents, David and Ethel, and sister, Jo Ann. The encouragement and support you have always given me, in many different ways over a long period of time, have sustained me throughout. This experience and what I have learned here are due to your efforts.

Dedication

To my parents and the memory of Maurice Yatt and Edith Hazelhurst

Woza Moya omuhle
Sibambane ngezandla sibemunye
(Come, good spirit; let us join hands and be as one)
— Work For All, Juluka, 1983

Chapter 1

Introduction

1.1 Motivation

As computers become ubiquitous in our society, as more parts of our global society are affected directly and indirectly by computers, the need to ensure their safe and correct behaviour increases. The hyperbole encountered in the media tends to make people blasé about the importance of computers and undervalue the revolutionary effect that computers have had. But, as our dependency on computers increases, so does the complexity of computer systems, making it more difficult to design and build correct systems at the same time as it becomes more important to do so. What we can do 'sort of' right far exceeds what we can do properly.
As a scientific and engineering discipline, computer science is intimately concerned with making predictions about and knowing the properties of computer systems, and it is here that mathematics and the application of methods of formal mathematics are critical. Traditional methods of ensuring correct operation of software and hardware are often not able to provide a sufficiently high degree of confidence in correctness. Methods such as testing and simulation of systems cannot hope to provide anywhere near exhaustive coverage of system behaviour, and while sophisticated test generation techniques exist, the sheer size of systems makes testing more and more difficult and expensive.

Verification — a mathematical proof of the correctness of a design or implementation — uses formal methods to obviate these problems. Questions of verification have been at the heart of computer science since the work of Turing and others [124], and the fundamental limits of computation (questions such as computability, tractability and completeness) are of immense consequence when discussing the theoretical and practical limitations of verification. The theoretical importance of verification is reflected in the practical consequences of verification, or lack thereof, which has been illustrated recently by the extremely well-publicised error in a commercial microprocessor [64, 104, 110].

This is not to suggest that formal methods are a panacea, or that other approaches are unimportant. Indeed, in many safety-critical or other important applications, there may be social and ethical constraints on what should be built. There are many technical and non-technical factors that will affect the quality of systems that are built. Testing at different levels will continue to be important. Moreover, there are limitations on what verification can offer. With respect to hardware verification, Cohn points out that neither the actual hardware implementation nor the intentions motivating the device can be subject to formal methods [42]. Verification is inherently limited by the models used. And verification is computationally expensive and requires a high level of expertise. Although there has been some success in the use of formal methods, there are a number of practical and organisational problems that must be dealt with, especially when formal methods are first used by an organisation [114, 119].

Over a quarter of a century ago, C.A.R. Hoare summed up his view of the use of formal methods [82]:

The practice of supplying proofs for nontrivial programs will not become widespread until considerably more powerful proof techniques become available, and even then will not be easy. But the practical advantages of program proving will eventually outweigh the difficulties, in view of the increasing costs of programming error. At present, the method which a programmer uses to convince himself of the correctness of his program is to try it out in particular cases and to modify it if the results produced do not correspond to his intentions. After he has found a reasonably wide variety of example cases on which the program seems to work, he believes that it will always work. The time spent in this program testing is often more than half the time spent on the entire programming project; and with a realistic costing of machine time, two thirds (or more) of the cost of the project is involved in removing errors during this phase.
The cost of removing errors discovered after a program has gone into use is often greater, particularly in the case of items of computer manufacturer's software for which a large part of the expense is borne by the user. And finally the cost of error in certain types of program may be almost incalculable — a lost spacecraft, a collapsed building, a crashed aeroplane, or a world war. Thus, the practice of program proving is not only a theoretical pursuit, followed in the interests of academic respectability, but a serious recommendation for the reduction of costs associated with programming error.

As a manifesto for verification, with minor changes it might well have been written today. On the surface, re-reading this may seem to be cause for pessimism — what has changed in 25 years? However, this is misleading. Verification is very difficult and can be extremely expensive (Owre et al. estimate the cost of a partial formal specification and verification of a commercial, 500,000-transistor microprocessor at 'three man-years' of work [105]); this complexity, lack of expertise, and conservatism are problems in the greater adoption of formal methods. But the cost of not performing verification can be much higher (Intel estimated the cost of the flaw in the Pentium microprocessor at US$475 million [65]), and as will be seen in Chapter 2, there have been significant theoretical and practical advances showing that the promise of advantages from formal methods has been realised. The progress that has been made, the increased needs for the use of verification, and the challenges which these needs create, make the comments expressed in this extract more relevant today than they were in 1969: we need more powerful proof techniques, and techniques that are easier to use.

The rest of this chapter is structured as follows. Section 1.2 introduces the use of verification and formal methods. Section 1.3 motivates and describes the underlying approach to verification adopted in this thesis. Section 1.4 describes the research contribution of the thesis, and Section 1.5 outlines the rest of this thesis.

1.2 Verification and the Use of Formal Methods

Consider an example of a chip which divides two 64-bit numbers. There are 2^128 possible combinations of input. Exhaustive testing of all these combinations is an impossible feat — even if we were to test 10^9 combinations a nanosecond for a million millennia we would be able to test fewer than one per cent of cases. Moreover, this testing would ignore the possible effects of the internal state of the chip (it could be that the chip works correctly when initialised, but that the effect of computing some answers updates internal registers so that subsequent computations are incorrect).
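The arithmetic behind this estimate is easy to confirm. The short sketch below (in Python, using the figures exactly as given in the text and taking a millennium as 1,000 years) computes the fraction of the input space such a test campaign could cover:

    # Sanity check of the exhaustive-testing estimate for a 64-bit divider.
    # Assumptions taken from the text: 10^9 input combinations tested per
    # nanosecond, sustained for a million millennia (10^9 years).
    SECONDS_PER_YEAR = 365.25 * 24 * 3600          # roughly 3.156e7 seconds
    tests_per_second = 1e9 * 1e9                   # 10^9 tests per nanosecond
    years            = 1e6 * 1000                  # a million millennia

    tests_done   = tests_per_second * SECONDS_PER_YEAR * years
    combinations = 2.0 ** 128                      # two 64-bit operands

    print(f"coverage = {100 * tests_done / combinations:.4f}% of all inputs")
    # Prints roughly 0.0093%, i.e. far fewer than one per cent of cases.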
This example illustrates the underlying problem in checking for correctness. The number of behaviours of a system, particularly if it is reactive or concurrent, is very large. Not only does this make exhaustive testing impractical, it makes reasoning about computer systems, whether software or hardware, difficult.

Since testing often cannot be comprehensive, verification is appealing in giving a higher confidence in the correctness of systems. The use of formal methods allows a mathematical proof to be given of correctness. Of course, we can only verify what can be modelled mathematically. The verification of the correctness of a chip is the verification of its logical design. We have some mathematical model of the behaviour of the components (gates or transistors) and use this to infer properties of the system. Such a verification is only as good as the model of the components. Models like this must make simplifications about the physical world. While often the simplifications made do not affect our ability to make predictions about the behaviour of the world, it is important to realise the potential problem. The question of how good the model of the world is, and the problem of realising a logical design as a physical artifact, are critical problems. However, they are beyond the scope of this thesis.

Focussing on the problem of verifying a logical design is difficult enough, and this will be the focus of this research: this section introduces verification and some of the research problems associated with verification, and Chapter 2 will give a fuller survey of verification.

Verification requires that both the specification and the implementation be described using some mathematical notation with a well-defined formal semantics. There are many choices open to the verifier. Common choices for describing an implementation are finite state machines or labelled transition systems — often these are extracted directly from higher level descriptions such as programs. A common choice for the specification is a temporal logic, which allows the description of the intended behaviour of a system over time. If the implementation is described as a finite state machine and the specification as a set of temporal formulas, verification consists of showing that the finite state machine satisfies these formulas.

The fundamental problem with verification is that the number of states in a model of a system is exponentially related to the number of system components; this is known as the state explosion problem. Finding automatic verification techniques is difficult; the general versions of the problem are undecidable [124] and restricted versions remain undecidable, while others are NP-hard [55]. Many verification approaches have been suggested — these will be surveyed in the next chapter.

The problems caused by large state spaces manifest themselves in different ways, as can be seen with two of the most popular methods, theorem proving and automatic model checking. A large state space imposes significant computational costs on the verification task. This is a particular problem for automatic model checking techniques, which are based on state exploration methods. Although theorem provers may be less sensitive to the size of the state space in terms of their computational cost, the cost of human intervention is high, often requiring a high degree of expertise and making the verification more difficult and much more lengthy.

Dealing with the state explosion problem motivates much research in verification, and a number of methods to limit the problem have been suggested. Some of the methods examined in this research are:

• The use of good data structures to represent model behaviour is critical. The development and use of Ordered Binary Decision Diagrams in the 1980s was very important in extending the power of automatic verification methods.
• Abstraction. By constructing an abstraction of the model, and proving properties of the abstraction rather than the model, significant performance benefits may be gained. Of course the problem of finding the abstraction, and showing that the properties proved of the abstraction are meaningful for the model, are non-trivial.

• Compositionality. Divide and conquer is one of the most common strategies in computer science, and one which can be very helpful with verification. Property decomposition is useful when the cost of verification is highly sensitive to the complexity of the properties to be proved; it provides a way of combining 'smaller' results into 'larger' ones. Structural decomposition allows different parts of the system to be reasoned about separately; these separate results are then used to deduce properties of the entire system.

• Hybrid approaches. Different verification techniques have different advantages and disadvantages, so by combining different approaches it might be possible to overcome the individual disadvantages.

The choice of model of the system is critical. This choice affects the way in which properties are proved, what satisfaction means, and how abstraction and compositionality can be used. The next section motivates and describes the method of representing state space and model behaviour adopted by this thesis.

1.3 Partially-ordered State Spaces

One of the starting points of this thesis is that partially-ordered sets are effective representations of state spaces of systems. This section introduces the necessary mathematical definitions, motivates why partial orders are useful representations, describes how they are used, and then introduces an appropriate verification method.

1.3.1 Mathematical Definitions

A partial order, R, on a set S is a reflexive, anti-symmetric and transitive relation on S, i.e. R ⊆ S × S and:

    ∀s ∈ S, (s, s) ∈ R (reflexivity);
    ∀s, t ∈ S, if (s, t) ∈ R and (t, s) ∈ R then s = t (anti-symmetry);
    ∀s, t, u ∈ S, if (s, t) ∈ R and (t, u) ∈ R then (s, u) ∈ R (transitivity).

Typically, an infix notation is used for partial orders. Thus, if ⊑ is a partial order, then x ⊑ y is used for (x, y) ∈ ⊑. A preorder on S is a reflexive and transitive relation.

If ⊑ is a partial order on S, then it can be extended to cross-products of S and sequences of S. If (s_1, ..., s_n), (t_1, ..., t_n) ∈ S^n, then (s_1, ..., s_n) ⊑ (t_1, ..., t_n) if s_i ⊑ t_i for i = 1, ..., n. Similarly for sequences (elements of S^ω), s_1 s_2 s_3 ... ⊑ t_1 t_2 t_3 ... if s_i ⊑ t_i for i = 1, 2, ....

If S is a set with partial order ⊑, and s, t ∈ S, then u is the least upper bound, or join, of s and t if s, t ⊑ u (i.e. it is an upper bound) and, whenever s, t ⊑ v, then u ⊑ v (i.e. it is no larger than any other upper bound). In this thesis, the join of s and t will be denoted s ⊔ t. In general, it is not the case that every pair of elements in a partially ordered set has a join — a pair of elements could have several minimal upper bounds, each incomparable with the others, or no upper bound at all. Similarly, the greatest lower bound of s and t, or the meet of s and t, is denoted s ⊓ t, and in general not all pairs of elements will have a meet.

A partially ordered set S is a lattice if every pair of elements has a meet and join. By induction, in any lattice any finite subset has a least upper bound and a greatest lower bound. S is a complete lattice if every set of elements — finite or infinite — has a least upper bound and greatest lower bound. In particular, complete lattices have unique universal upper and lower bounds. Note that all finite lattices are complete — this is a result that is used extensively in this thesis.
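These definitions can be made concrete with a small illustration of my own (not taken from the thesis): computing joins and meets by brute force over the subsets of a two-element set ordered by inclusion, and confirming that this finite poset is a lattice (and therefore, being finite, a complete lattice).

    from itertools import combinations

    # A small finite poset: the subsets of {'a', 'b'} ordered by inclusion.
    base = {'a', 'b'}
    elements = [frozenset(c) for r in range(len(base) + 1)
                for c in combinations(sorted(base), r)]
    leq = lambda x, y: x <= y          # the partial order (set inclusion)

    def join(x, y):
        """Least upper bound of x and y, or None if it does not exist."""
        ubs = [u for u in elements if leq(x, u) and leq(y, u)]
        least = [u for u in ubs if all(leq(u, v) for v in ubs)]
        return least[0] if least else None

    def meet(x, y):
        """Greatest lower bound of x and y, or None if it does not exist."""
        lbs = [l for l in elements if leq(l, x) and leq(l, y)]
        greatest = [l for l in lbs if all(leq(v, l) for v in lbs)]
        return greatest[0] if greatest else None

    # Every pair has a meet and a join, so this finite poset is a lattice.
    assert all(join(x, y) is not None and meet(x, y) is not None
               for x in elements for y in elements)
    print(join(frozenset('a'), frozenset('b')))   # frozenset({'a', 'b'})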
If S is a complete lattice over the partial order ⊑, then, under the natural extensions of ⊑:

• a finite cross-product of S, S^n, is a complete lattice; and
• S^ω, the set of all sequences of S, is a complete lattice.

If S1 and S2 are two lattices with partial orders ≤1 and ≤2, and g: S1 → S2 is a function, then g is monotonic with respect to ≤1 and ≤2 if s ≤1 t implies that g(s) ≤2 g(t). If S is a lattice and A ⊆ S, then A is upward closed if a ∈ A, x ∈ S and a ⊑ x implies that x ∈ A. Similarly, A is downward closed if a ∈ A, x ∈ S and x ⊑ a implies that x ∈ A.

Partial orders are used in two important ways in this thesis. First, given a state space, partial orders are used to compare the information content of states. s ⊑ t implies that s has less information than t; if s ⊑ t and s ⊑ u, then informally we can think of s as representing both t and u: it is an abstraction of these two states. It is fairly easy to generate partial order models of systems like circuits from gate-level descriptions, and good partial-order models can automatically be extracted from switch-level descriptions in many cases. The second way partial orders are used is to differentiate between levels of truth, a central theme in this thesis.
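The notions of pointwise extension and monotonicity can likewise be illustrated with a small self-contained sketch (again my own toy example, not drawn from the thesis): an ordering on a three-element set is extended to pairs, and a simple function on pairs is checked for monotonicity by brute force.

    # Toy illustration of the definitions above: a three-element ordered set,
    # the pointwise extension of its ordering to pairs, and a brute-force
    # monotonicity check for a simple function on pairs.
    from itertools import product

    S = [0, 1, 2]                                  # a small chain: 0 <= 1 <= 2
    leq = lambda x, y: x <= y                      # the partial order on S

    def leq_pair(s, t):
        """Pointwise extension of the ordering to elements of S x S."""
        return all(leq(a, b) for a, b in zip(s, t))

    def g(s):
        """A simple function on pairs: copy the first component into both."""
        return (s[0], s[0])

    pairs = list(product(S, repeat=2))
    assert all(leq_pair(g(s), g(t))
               for s in pairs for t in pairs if leq_pair(s, t))
    print("g is monotonic with respect to the pointwise ordering")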
1.3.2 Using Partial Orders

Formally, a model can be described by ((S, ⊑), Y), where S is a complete lattice under the partial order ⊑ and the behaviour of the model is represented by the next-state function Y : S → S, which is monotonic with respect to the partial order. The partial order can be extended to sequences of S.

To see why partial orders might be useful, consider as an example a system which can be in one of five states. A next-state function Y describes the behaviour of the system. The state space could be represented by a set containing five elements. However, there is an advantage in representing the state space with a more sophisticated mathematical structure. In this example, we represent the state space with the lattice shown in Figure 1.1 (note that this is just one possible lattice). States s4–s8 are the 'real' states of the system, and the other states are mathematical abstractions (Y can be extended to operate on all states of the lattice). The partial ordering of the lattice is an information ordering: the higher up in the ordering we are, the more we know about which state the system is in. For example, the model being in state s1 corresponds to the system being in state s4 or s5. State s9 represents a state that has contradictory information.

(Figure 1.1: Example Lattice State Space.)

States like s1 are useful because if one can prove that a property holds of state s1, then (given the right logical framework) that property also holds of s4 and s5. There can be a great performance advantage in proving properties of states low in the lattice.

Furthermore, state s9 plays an important role, since it represents states about which inconsistent information is known. Although such states do not occur in 'reality', they are sometimes artifacts of a verification process. A human verifier may introduce conditions which are inconsistent with each other or with the operation of the real system. These conditions could lead to worthless verification results — ones that, while mathematically valid, tell us nothing about the behaviour of the system and may give verifiers a false sense of security ('A truth that's told with bad intent / Beats all the lies you can invent.' — William Blake). Since it may not be possible to detect these inconsistencies directly, it is useful to have states in which inconsistent properties can hold at the same time. In such states, a property and its negation may both hold, and we should have a way of expressing this.

In this example, the potential savings are not large, but for circuit models extremely significant savings can be made. The state space for a circuit model represents the values that the nodes in the circuit take on, and the next-state function can be represented implicitly by symbolic simulation of the circuit.

The nodes in a circuit take on high (H) and low (L) voltage values; there is a natural lattice in which these voltage values can be embedded. It is useful, both computationally and mathematically, to allow nodes to take on unknown (U) and inconsistent or over-defined (Z) values. The set C = {U, L, H, Z} forms a lattice, with the partial order given in Figure 1.2.

(Figure 1.2: The Partial Order for C — U lies below L and H, which lie below Z.)

The state space for a circuit then is naturally represented by C^n, which is a complete lattice. Consider a circuit with n components and a state, s, of the circuit: s = (v_1, ..., v_m, U, ..., U), where the v_i are boolean values and the remaining n − m components are U. With the right logical framework, if we can prove that a property g holds of the state s, then we can infer directly that the property holds for all states above it in the information ordering.

If we only consider the subset of states {L, H}^n (those states with known, consistent voltages on each component), there are 2^(n−m) states above s, of the form (v_1, ..., v_n), where the v_i are boolean variables. So, in one step, 2^(n−m) 'interesting' proofs are done (this step would also prove properties about states with partial or inconsistent information). Through the judicious use of U values, the number of boolean variables needed to describe the behaviour of the circuit can be minimised, increasing the size of the circuits that can be dealt with directly.

The purpose of model checking is to determine whether a model has a certain property — ideally, a verification method should answer this 'yes' or 'no'. Unfortunately, the performance benefit gained by using only partial information compromises this goal. In the example above, while every property of the circuit will be true or false of states s4–s8, there will be some properties which are neither true nor false of states s0–s3, since there is insufficient information about those states. The converse problem exists with a state like s9. Assigning the same level of credibility and meaningfulness to the truth of property g in state s9 as to the truth of g in s5 violates a common-sense understanding of truth.

Both these factors indicate that a two-valued logic has insufficient expressiveness when dealing with a partially-ordered state space. To say that something is true or false in states like s1 and s9 may be very misleading. And, we shall see later that a two-valued logic also has a serious technical defect in this situation.
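The computational point can be made concrete with a short illustrative sketch (my own; the thesis manipulates these values symbolically with BDDs rather than by enumeration): it encodes the ordering on C, extends it pointwise to circuit states, and counts the Boolean states lying above a partially specified state.

    # The node-value lattice C = {U, L, H, Z}: U below L and H, Z above both.
    from itertools import product

    ORDER = {('U', 'U'), ('U', 'L'), ('U', 'H'), ('U', 'Z'),
             ('L', 'L'), ('L', 'Z'),
             ('H', 'H'), ('H', 'Z'),
             ('Z', 'Z')}
    leq = lambda x, y: (x, y) in ORDER             # x below y in C

    def leq_state(s, t):
        """Pointwise extension of the ordering to circuit states in C^n."""
        return all(leq(a, b) for a, b in zip(s, t))

    # A state of an 8-node circuit: 3 known node values, 5 unknown ones.
    s = ('L', 'H', 'L', 'U', 'U', 'U', 'U', 'U')

    # Count the fully Boolean states (all components in {L, H}) above s.
    boolean_states = product('LH', repeat=len(s))
    above = sum(1 for t in boolean_states if leq_state(s, t))
    print(above)   # 32 == 2**5: one check of s covers all of these states.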
1.3.3 Symbolic Trajectory Evaluation

Symbolic trajectory evaluation (STE) is a model checking approach based on partially ordered state spaces. STE computes the next-state relation using symbolic simulation. Not only does this allow the partially-ordered state space structure to be exploited effectively, it supports accurate, low-level models of circuit structures. ('Approach' is emphasised above because a number of different possible STE-based algorithms and implementations exist. Moreover, STE-based algorithms can be used in different logical frameworks.)

Previous work with STE has shown that it is an effective method for many circuits (e.g., see [8, 47]) and it is recognised as one of the few methods with good asymptotic performance on a large class of non-trivial circuits [26]. STE is particularly useful in dealing with large circuits, where the circuit is modelled at a low level (gate or switch level), and where timing is important. Higher-level verifications are important too, but, as Cohn points out, realistic and detailed models of circuits are important to ensure that the mathematical results proved are meaningful [42].

Although successfully applied, these STE-based approaches are not without their problems. First, the underlying state explosion problem still exists, and as with all verification methods, better and more powerful techniques must be developed, as the computational bottlenecks are still there. Second, in existing STE-based approaches, the logic used to express properties is limited; for example, disjunction and negation are not fully supported. While the logic is expressive enough for many problems and the restricted form of the logic leads to very efficient model checking algorithms, there are problems which need a richer logic. Third, previous approaches have used a two-valued logic, which, in the context of partially ordered state spaces, is confusing. For a restricted logic, the complication caused by insufficient and contradictory information can be dealt with adequately in an extra-logical way; this is inadequate for a richer logic.
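The flavour of simulation over such value domains can be seen in a tiny sketch of my own, restricted to the ternary subset {U, L, H}; the generalised STE developed in this thesis works over the full lattice C and a quaternary logic, and represents values symbolically rather than explicitly.

    # Ternary AND over {U, L, H} (U = unknown): the monotone extension of the
    # Boolean AND gate used in ternary symbolic simulation.  Illustrative only.
    def and_gate(a, b):
        if a == 'L' or b == 'L':      # a low input forces the output low,
            return 'L'                # whatever the other input is
        if a == 'H' and b == 'H':
            return 'H'
        return 'U'                    # not enough information to decide

    for a in 'ULH':
        for b in 'ULH':
            print(a, b, '->', and_gate(a, b))

    # One evaluation with a U input summarises many Boolean runs:
    # and_gate('L', 'U') == 'L' covers both and_gate('L', 'L') and
    # and_gate('L', 'H') in a single step.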
This thesis starts from the premiss that extending the power of STE-based methods by increasing the range and size of systems that can be verified, and the types of properties that can be expressed in a veri-fication is a significant contribution. The goal of this research is to show that the applicability of symbolic trajectory evalua-tion can be significantly extended though the development of an appropriate temporal logic for model checking partially-ordered state spaces, and the use of a compositional theory for trajec-tory evaluation. The specific contributions of this thesis are listed below. • Proposing a suitable temporal logic suitable for partially ordered state spaces. Traditional two-valued logics are unsuitable for expressing properties of partially-ordered state spaces; my first thesis is that a four-valued logic is suitable. This logic distinguishes the following four cases: true, false, under-determined, and over-determined. Not only does the four-valued logic provide a framework for representing our knowledge of the Chapter 1. Introduction 14 degree of truth of a proposition, it is a suitable technical framework. This framework is useful not only for STE-based model checking algorithms, but other verification methods based on partially-ordered state spaces developed in the future. A qualification is in order — the use of uncertainty to model both state information and degrees of truth is epistemological rather than ontological in nature. The question of un-certainty in the 'real world' is well outside of the scope of this thesis. Uncertainty is used in system models because this offers significant computational advantages. This uncer-tainty in the model induces an uncertainty in our knowledge of the model. Thus the four-valued logic is useful to reason about our knowledge of the model, and the use of the four valued logic is proposed for its utilitarian value, not as an excursion into general philos-ophy. • Generalisation of symbolic trajectory evaluation based algorithms My second thesis is that using the four-valued framework, STE-based algorithms can be generalised to support a richer logic. Providing a richer logic is important because it sup-ports the verification of a greater range of applications. Moreover, it often makes the spec-ification of properties clearer, which makes the verification more meaningful for the user; and more elegant specifications can also lead to more efficient model checking. • A Compositional Theory My third thesis is that a compositional theory for model checking partially-ordered state spaces can be developed, and provides a foundation for overcoming the performance lim-itations of model checking. The compositional theory allows verification results to be combined in different ways into larger results. The structure of the state space lends itself to the compositional theory, and together with the compositional theory allows very large state spaces to be model checked. The key part of the development of the compositional theory is to show that it is sound; that all results inferred could, at least in principle, be Chapter 1. Introduction 15 directly obtained through trajectory evaluation or some other model checking algorithm. 
• Development of a practical tool While the proposed four-valued logic and compositional theory have theoretical interest, a major part of the significance of the contribution of the work comes from my fourth thesis: that generalised STE and the compositional theory for model checking partially ordered state spaces using a four-valued logic can be used effectively, making a significant contribution to the size and complexity of circuits that can be verified. This is demonstrated by the development of prototype verification tools. These proto-types show that it is effective to combine theorem proving and STE-based model-checking as these approaches complement each other. While the prototypes are not of interest in themselves, they demonstrate that very large circuits can be formally verified using the approaches advocated here. The prototypes are also of interest because of the lessons they provide about tool-making. 1.5 Outline of Thesis The rest of the thesis is structured as follows. Chapter 2 gives a brief overview of verification, and then reviews related literature. This raises the important issues and problems of verification, motivates choices made in this research, and places the research into context. Chapter 3 presents the four-valued logic Q, and the temporal logic, TL, based on Q. After defining the logics, the issue of satisfaction — what it means to say that a certain property holds of a state or sequence of states — is explored and different alternatives given. The theory of generalised symbolic trajectory evaluation is given in Chapter 4 using the theory presented in Chapter 3. Although the theory of trajectory evaluation is general, at this stage its major application area is circuit verification. Chapter 1. Introduction 16 Chapter 5 develops the theory of composition for the verification of partially-ordered state spaces. The compositional inference rules are explained, and shown to be sound. Simple exam-ples are given. The compositional theory is very important in increasing the range of systems that can be verified using trajectory evaluation. Chapter 6 ties the preceding chapters together and shows how the theory can be practically implemented. Issues of data and state representation are discussed, practical model checking algorithms based on STE outlined, as well as how the verification style of model checkers and theorem provers can be combined. Chapter 7 is devoted to example verifications. A few simple verification examples are given to show the style of verification, and then some large verification examples are given. This chap-ter shows that the methodology proposed here can be effectively implemented. Chapter 8 is a conclusion, and the appendix contains some of the more technical proofs and example programs. A Guide to the Reader This thesis contains many definitions, theorems and a significant level of mathematical nota-tion. A reader may find the index at the end of the thesis and the List of Important Definitions, Theorems and Lemmas starting at page ix useful in finding cross-references. The nature of the research requires that the thesis contain many proofs. Many of these proofs are highly technical and uninteresting in themselves; this does not make for the easiest or most captivating reading, for which I apologise. I have tried to make the exposition of proofs as clear as possible, and have adopted the following convention for proofs. Each step in the proof con-tains three parts, laid out in three columns: a label, a claim, and a justification. 
The justification may refer to previous steps in the proof using the labels given. Chapter 1. Introduction 17 Lemma 1.1 (Example). If S is a complete lattice under the partial order C , and g : S -» <S is monotonic with respect to C , then for all s,t £ S, g(s) U #(*) C 5(5 U t). Proof. (1) Definition of join. (2) tQsUt Definition of join. (3) g(s)Qg(sUt) From (1) by monotonicity of g (4) g(t)Qg{sUt) From (2) by monotonicity of g (5) g(s)Ug(t)Og(sL\t) From (3), (4) by property of join Chapter 2 Issues in Verification This chapter is intended to place the thesis work in perspective and relate the research to other work. It is not intended as a comprehensive survey of verification, and therefore some simpli-fications are made and important verification methods skimmed over. For fuller surveys on the topic see [73,97, 119]. Overview of Chapter Section 2.1 briefly introduces a method of representing boolean functions. Since boolean ex-pressions are used extensively in verification for a variety of purposes, efficient methods for representing and manipulating them is essential. The review of verification starts with Section 2.2, which introduces two of the main styles of verification. In one style, verifying a model means checking whether the model has certain properties. In the other method, two models are compared to see whether a certain relationship holds between the models (for example whether they have equivalent observable behaviour). For each of the styles of verification, there are a number of possible verification techniques. Section 2.3 gives a brief overview of some of the large number of verification techniques, dif-fering in approach and detail, that have been proposed. Section 2.4 examines one of these proof techniques in more detail; the method of symbolic trajectory evaluation proposed by Bryant and Seger forms the basis of this thesis. Due to the computational complexity of verification, all these methods have limitations, and 18 Chapter 2. Issues in Verification 19 research continues in trying to improve upon them. Much of this research in improving verifica-tion techniques deals with the search for better algorithms and data structures. Although this is very important, the underlying complexity limitations indicate that something more is needed. Two of the most promising lines of research in this regard have been the work in composition-ality and abstraction. They are discussed in Sections 2.5 and 2.6. Section 2.7 concludes the review with a brief discussion of the issues raised in this chapter. 2.1 Binary Decision Diagrams Many verification techniques — including STE — represent boolean expressions with a data structure called (ordered) Binary Decision Diagrams (BDDs). BDDs are a compact, canonical method for manipulation of boolean expressions [22]. A BDD is a directed, acyclic graph, with internal vertices representing the variables appearing in the expression. BDDs are ordered in the sense that on all paths in the graph, variables appear in the same order. Using this representation, operations such as conjunction, negation and quantification and equivalence testing can be efficiently implemented. Boolean expressions are used to represent state information and truth of propositions, so it is critical that they can be manipulated efficiently. BDD-based approaches have been extremely successful. Unfortunately, although BDDs are a very compact representation, there are things that cannot be represented efficiently. 
This is not surprising; the satisfaction problem [63] can be represented and solved using BDDs so if a BDD representation polynomial in size could be constructed in polynomial time, this would imply that P=NP. Some arithmetic problems cannot be represented efficiently. For example, multiplication of two integers (represented as bit-vectors) requires BDDs exponential in size. For a discussion of the limitations of BDDs, see [21]. Chapter 2. Issues in Verification 20 One of the critical issues when BDDs are used is the ordering of variables used in the con-struction of the graph. The size of the resulting BDD may be highly dependent on the variable ordering, so it is vital that a good variable ordering is used. In general human intervention is needed to determine a good ordering, but fortunately for many real problems a good variable ordering can be found, and heuristics for dynamic variable ordering can be applied successfully. So, while the need to find good variable orderings is an issue, it is not a fundamental problem with BDD-based methods. Although BDD-based approaches are very successful, there are a number of other successful approaches that do not use BDDs. Some of these are described below. The success of BDDs has also motivated research on other data structures for representing data that BDDs cannot represent efficiently (these are mentioned below too). 2.2 Styles of Verification To say that a program or circuit is verified is to say that there is a proof that certain mathemat-ical statements are true of a model of that system. This section looks at the different ways of expressing these statements, while Section 2.3 looks at how the statements are proved to hold. Section 2.2.1 introduces the property checking approach. The idea here is that there is a formal language for expressing properties of the system, and verification consists of proving that these properties hold. Section 2.2.2 leads on from this by introducing modal and temporal logics; these are logics that are commonly used to express properties of interest. This is the style of verification adopted in this thesis. Section 2.2.3 introduces the other style of verification, model comparison. Here, two models of the program or circuit are expressed formally (typically, a specification and an implementa-tion), and verification consists of proving that the two models are equivalent. Chapter 2. Issues in Verification 21 2.2.1 Property Checking One major approach to verification is to determine whether a description of a program1 has (or does not have) a set of properties. Turing showed that this problem — one of the foundational problems in computer science — is, in general, undecidable: for example, there is no general method for determining whether a program halts, or whether it prints out a zero [124]. One of the landmarks in program verification was the development of the Floyd-Hoare logic used to describe the behaviour of sequential programs (introduced in [82], and see [67] for a good introduction). In this logic, verification results are written in the form {A}P{C}, where A and C are logical formulas and P is the program segment. This Hoare triple says that if A holds when P starts executing, then if P completes then C will hold. There have been many other approaches used to describe the behaviour of sequential (see [7] as a good example of this style of proof) and concurrent programs (see [96] for an example). 
This style of verification can be used for small programs, and can be appropriate for small, complicated algorithms. However, on a larger scale it is not useful as it is just too tedious to use, especially for hardware systems. There are many ways of expressing properties. For example, one approach has been to perform reachability analysis on the program (e.g. discovering whether there are any deadlock states). Another approach — the one adopted in this research — is to use some form of logic to express properties. Often modal or temporal logics are used for reactive systems. 2.2.2 Modal and Temporal Logics Modal logics are systems of logic for describing and reasoning about contingent truths. The type of modal and temporal logics of interest here are used to describe the behaviour of systems 1 As the term 'system' can be ambiguous since it can refer to the system being verified, or the tool performing verification, the term 'program' is used in a generic sense to describe the system being verified, whether or not the system is represented as a program, a finite state machine, a netlist etc. Chapter 2. Issues in Verification 22 that have dynamic or evolving structure. Many of these logics have been proposed; see the works of Galton [62], Emerson [52] and Stirling [120, 121] for overviews. Typically, a set of formulas of the logic form the specification of the program being verified. The verification task is to test whether the mathematical structure representing the program sat-isfies this set of formulas. The wide variety of modal logics reflects both the wide variety of application and complex-ity of the topic. Modal logics and the mathematical structures over which they are interpreted differ in expressiveness. Issues such as non-determinism and the ability to express recurring properties greatly affect issues such as usefulness, decidability and computational complexity. Temporal logics are particularly useful in verification. They can be used to specify the be-haviour of a system over time. Time can be a 'real' time, or some abstraction thereof; and can also be modelled as continuous or discrete. The method proposed in this thesis has the advan-tage of being able to model time fairly accurately. The most powerful logic of interest here is the family of modal /i-calculi, variously attributed to Park and Kozen. The expressive power of the //-calculi, determined by the modal operators available, have a marked effect on the decidability of logic: for example, the linear time modal //-calculus is decidable, while the branching time modal //-calculus is not [55]. Other modal and temporal logics are restricted versions of the //-calculus. CTL*, CTL, LTL, and the Hennessy-Milner logic are good examples of logics which can be encoded within a ver-sion of the //-calculus. There are a number of ways in which temporal logics can be classified (see [52] for details). The most important question is whether the logic is branching time or linear time (see [53] for some discussion of this). Chapter 2. Issues in Verification 23 2.2.3 Model Comparison In this approach, two models or descriptions are compared to see whether a given relationship holds between them. The most important relationship is equivalence, but there are other useful relationships. One way of using this form of checking is for one description to be a specifi-cation of a system and the other description to be an implementation. 
Showing that a formal relationship holds between the two descriptions shows that the implementation is correct. General versions of this problem are undecidable. Turing machine equivalence is the best example, and these decidability results apply to popular methods such as process algebras (as CCS can encode Turing machines this has direct relevance to much work in this area). However, there are restricted, useful versions of the problem which are decidable (see, for example [32]). There are a number of different ways in which models can be represented. What equivalence and more general types of relationships mean and how they are checked depends very heavily on this. Three of the main approaches are: 1. Process algebras such as CCS [102] and CSP [20]. There are many different types of equivalence which depend on how fine-grained an equivalence is desired (see [102] for a discussion of this). There are a number of other relationships which are denned as preorders on processes. These can be used to define correct implementations of specifications. See [80] for ex-amples. A good example of this approach is the LOTOS specification language which is based on CCS and CSP [15]. Equivalence and implementation relationships can be used to show that one LOTOS program is a correct implementation of another. 2. Language containment. If the descriptions are finite state machines, then equivalence Chapter 2. Issues in Verification 24 may be language equivalence. Some verification problems can be posed as language con-tainment problems. See [73] for an overview. 3. Logic. Equivalence is logical equivalence. Other logical relationships such as implica-tion may be suitable for showing that a model is a correct implementation of a specifica-tion. See [94] for an example. Other approaches exist (e.g. [74]). There is a close relation between equivalence checking and property checking. In CCS, two processes are bisimilar exactly when they satisfy the same set of formulas of the Hennessy-Milner logic [81]. Grumberg and Kurshan have shown a relationship between classes of CTL* formulas and language equivalence or containment problems [71]. 2.3 Proof Techniques Now that we have defined what we mean by program correctness, we can examine proof tech-niques. We first look at why formal proof techniques are important, and then examine some of these techniques: Section 2.3.1 discusses theorem proving; Section 2.3.2 discusses automatic techniques suitable for proving equivalences; and Section 2.3.3 discusses model checking, an approach that can be used to prove that models of systems satisfy temporal logic formulas. Sec-tion 2.4 presents the model checking that forms the basis of this thesis in more detail. Hand proof techniques are the most ubiquitous for a variety of reasons. They are powerful methods which allow a variety of proof techniques, appropriate informal arguments, and ab-stractions to be made. However, there are two important reasons why hand proofs are avoided in the context of verification, particularly hardware verification. First, proofs are extremely tedious to perform. Often they are not complex but have large amounts of intricate detail which is difficult and unpleasant for humans to keep track of. Second, errors are extremely likely. These two factors are related. Chapter 2. Issues in Verification 25 Some examples illustrate this. In [102], Milner presents the 'jobshop' example — a specifi-cation, implementation and a proof of weak bisimulation between the two. The proof has many errors. 
Most of these errors are trivial; however, there is a serious theoretical error on which the proof relies which was only detected much later. It must be emphasised that this is a best case scenario for hand proofs: the models and notation are fairly abstract, the proofs fairly short and interesting in themselves, and the person making the proof of undoubted mathematical ability.2 There are many other examples like this; they show how fallible and time-consuming hand proofs are (see [ 112]). The alternative to hand proofs are machine checked and automated proofs. A machine checked proof is a proof that has each step validated by a program that implements some logical inference system. An automated proof is one which is generated without human intervention according to some set of sound rules. Often the performance of these systems may depend on extra information given by the human verifier. These approaches have been applied to both the equivalence and property-checking types of problems. 2.3.1 Theorem Proving A theorem prover is a program that implements a formal logic. Using this program, statements in the logical system can be proved. Typically, the logical system consists of a set of axioms and inference rules, and the program ensures that all theorems are sound in that they are derived from the axioms by application of the inference rules. Although much work has gone into automatic theorem proving the key aspect is mechanically checking each step rather than the automatic derivation of the proof. Theorem provers can be used to prove theorems about any mathematical system. Within the 2For hardware verification the converse is true: the level of abstraction is low with intricate detail, the proofs are long and tedious and of no intrinsic interest, and few people doing verification are Turing Award winners. This criticism extends to other domains too. In a recent paper, Bezem and Groote present the verification of a network protocol in a recent paper [13]. The proof is very lengthy and detailed. The claim that this is not such a problem because the proofs are 'trivial' is unconvincing. Chapter 2. Issues in Verification 26 verification area, there is a strong theorem proving community and a range of different theorem provers have been used in verification tasks. Some examples of theorem provers and work done with theorem provers are: • HOL [68]. HOL is one of the first and best known theorem provers. It was built on work done on the development of LCF [69] in the 1970s (see [60] for a brief history). HOL implements a strongly typed higher-order predicate logic. The user's interface to HOL is through ML [107], a polymorphic, typed functional language. This interface promotes both security (by ensuring through the type system that only theorems proven in HOL can be proved) and flexibility by allowing the programmer access to a fully programmable script language. HOL has been used on the verification of a number of systems. • Boyer-Moore [17]. This theorem prover is based on a quantifier-free first order logic. It is heavily automated, although a user can (must?) 'train' the prover to deal with particular proofs. An example of a substantial verification effort using this system can be found in [86]. • PVS [106,105] is also theorem prover based on a typed, higher-order logic. It has a num-ber of decision procedures built in which allow a number of the proof obligations to be discharged automatically. See [112] for an example use of PVS. 
Theorem provers can be used for either equivalence or property checking. For example, if both the specification, S, and implementation, / , are logical formulas then asking whether S and / are equivalent means asking whether S = I is a theorem in the logic. In the property checking approach, Gordon shows how a simple theorem prover can be used to prove program correctness using the Floyd-Hoare logic [67]. Theorem provers have also been used in model checking; some work is directly relevant to this thesis. For example, Brad-field describes a 'proof assistant' for model checking //-calculus formulas over Petri nets. The Chapter 2. Issues in Verification 27 proof system is a tableau-based one (see below). At each step in the proof either the prover itself applies a proof rule, or the user does [18]. Sometimes, this type of approach leads to a hybrid system which uses both the automatic model checking algorithms described below and theorem proving approaches - this is discussed in more detail later. 2.3.2 Automatic Equivalence and Other Testing For certain systems which can be represented as labelled transition systems (such as certain classes of CCS agents), the Concurrency Workbench has algorithms for computing different kinds of equivalences and preorders [41]. The two advantages of using equivalences such as bisimulation over language equivalence are: • Bisimulation can distinguish behaviour which language equivalence can not. • There are significant computational advantages. For example, deciding regular language equivalence is PSPACE-complete, while the best known algorithm for deciding bisimu-lation between two regular processes is 0(m log n) where ra is the number of transitions in the process and n is the number of states [103]. For finite state systems, CCS agents can be represented using BDDs [54], from which equiv-alence relations can be computed [27]. Other work in this line includes a tool which can com-pute equivalence of LOTOS programs [56]. Other approaches can also be applied to transition systems, see ([1, 14]). 2.3.3 Model Checking Given a model of a system behaviour, M , and a temporal logic formula g interpreted over M, the model checking problem is to find out whether g holds of M, or whether M is a model of Chapter 2. Issues in Verification 28 g. Typically this is written as M \= g. Variations such as finding whether a set of states or a set of sequences of states satisfies the formula are used too. Model checking is a difficult problem: some useful versions are undecidable [55], and the satisfiability and model checking problems for even simple modal logics are NP-hard [63, 52]. For finite state systems, whether a structure is a model of a system can be determined directly from the satisfaction relation by explicit state enumeration: however, except for small systems this is rarely feasible. Tableau-based Methods The tableau-based method is one of the best-known methods and a number of variations have been implemented (the best known implementation is the Concurrency Workbench [41]). A l -though the underlying proof method is very different to the method of symbolic trajectory eval-uation, this method is of some relevance because tableau systems use rules of inference, and be-cause there has been much work in compositional reasoning. Good introductions to the tableau method are [19, 121]. Note that tableau methods do not always require the construction of the global state space. 
A tableau is a proof tree built from a root sequent of the form S ⊢ Φ (this is the goal sequent). The tree is built using one of the tableau rules until all the leaves of the tree are terminals. If all the terminals are 'successful' then S |= Φ. The most important and difficult part of the tableau construction is dealing with the fixed-point operators, particularly the least fixed point operator. Stirling and Walker proposed a sound and complete tableau system for finite-state processes [122]. They showed that the tableau construction always terminates, making this method an effective model checking scheme. They also show how the model checking algorithm of Winskel [127] can be incorporated in a tableau scheme. Bradfield extended the tableau approach to infinite state systems [19]. Dealing with the fixed point operators is more complicated as the definition of a successful terminal takes some care. His approach is sound and complete: if S |= Φ then applying the rules automatically will derive S ⊢ Φ, and S ⊢ Φ will only be derived when S |= Φ. Note, however, that if S does not satisfy Φ, his algorithm may not terminate.

Automatic Model Checking through State Exploration

For finite state systems, it is feasible to model check some logics through state exploration methods. Although model checking expressive temporal logics such as CTL* is very expensive (the problem is PSPACE-complete), for less expressive logics there are better results (note that model checking LTL is also PSPACE-complete). The best known result is one for model checking the subset of CTL* known as CTL [36]. This algorithm works by building the state transition graph, and then using graph algorithms to label the states in the graph. The algorithm is O(|S| |φ|), where |S| is the number of states in the system and |φ| is the size of the formula φ. Recently this work has been extended to show how a richer logic, CTL2, can be model checked with the same complexity result [12]. Although the algorithm is linear in the size of the state space, this is a significant limitation since the size of the state space in many realistic systems is extremely large (a very small circuit with only 100 state holding components can have a reachable state space of size 2^100).

State exploration methods can be extended to some types of infinite systems. Burkart and Steffen have developed a state exploration method for effective model checking of the alternation-free µ-calculus for context-free processes [29]. (A local model checking version based on tableaux has also been developed [85].)

Symbolic Model Checking

For finite systems symbolic model checking methods are very popular and have had success in a number of applications. The use of BDDs has revolutionised model checking by providing a compact method for implicit state representation, thereby increasing by orders of magnitude the size of the state space that can be dealt with. (Other approaches exist too [14, 46]; however BDDs seem to be most effective for a large class of problems.) The most well-known work based on symbolic model checking and BDDs has emerged from Carnegie Mellon University. A number of model checking algorithms for the modal µ-calculus and other logics have been developed [26, 27]. The SMV verification system based on these ideas has successfully verified a range of systems [26, 98]. The basic idea of these approaches is to represent the transition relation of the system under consideration with a BDD.
A set of states is also represented with a BDD. Given a formula of the temporal logic, the model checking task is to compute the set of states that satisfy the for-mula. The operations defined on BDDs allow the computation of operations such as existential quantification, conjunction etc. Using these BDD operations, it is possible to compute the set of reachable states and the set of states satisfying a given formula. Although these methods have had some success, the computational complexity and cost of model checking remains a significant stumbling block. Symbolic CTL model checking is PSPACE-complete in the number of variables needed to encode the state space [98]. A number of approaches have been suggested to improve the performance of the algorithm: compositional approaches; abstraction; and improving representational methods (for example, partitioning the next state relation [26]). That BDDs revolutionised automatic model checking indicates the importance of good and appropriate data structures, and motivates the search for new ones, and considerable work is Chapter 2. Issues in Verification 31 being done on extending BDD-style structures and developing new ones [24, 34, 99]).3 All these approaches to improve symbolic model checking need to be pursued. Circuits with wide data paths are not suitable for verification with SMV, which itself is unable to verify circuits with arithmetic data. However, by extending the method through the use of abstrac-tion [39] or more sophisticated data structures [35] such circuits can be verified. There are other symbolic model checking approaches. Symbolic trajectory evaluation — a central part of this thesis — is one. It differs from other approaches in the novel way in which the state space is represented. Although the logic which it supports is limited it has been success-fully used in hardware verification [8,47]. Full details are given later. Other symbolic methods have been proposed in [16, 43, 87]. Combining Theorem Proving and Model Checking Since combining model checking and theorem proving has considerable promise, research has been done in combining the two approaches in different technical frameworks. Seger and Joyce linked the HOL and Voss systems. This allows the HOL theorem proving system to reason about properties of a circuit by using the model checking facilities of Voss [117]. Although there are some similarities between the prototypes presented in this thesis, and the HOL-Voss system, there are two important distinctions: • One of the important uses of a theorem prover with the Voss system is to reason about objects that do not have concise BDD representations in all cases — for example, integer expressions. Rather than providing a general and powerful theorem prover such as HOL, simple semi-automated methods are used to provide the prototypes the ability to do this (see Section 6.2). Although not as powerful as HOL, it is much simpler. • The prototypes provide specialised theorem provers that implement a compositional the-ory for STE. The use of this compositional theory increases the power of the verification 3 Some of these approaches are applicable to other model checking approaches too. Chapter 2. Issues in Verification 32 approach significantly. Kurshan and Lamport have combined the COSPAN model checker with the TLP theorem prover [93]. The model checker proves properties of components of the system, which are then translated into a form suitable for the theorem prover. 
In order to prove the overall result, a number of sub-results need to be proved. Not only is the way in which composition is handled different to the way it is in this thesis, there are also two very important practical distinctions: first, their approach is not entirely mechanised; second their approach relies on linking two quite distinct tools and using two distinct formalisms, rather than one integrated tool and verification style. The style of the method of Hungar [84], who also links model checking and theorem prov-ing, is closest to the method of combining model checking and theorem proving proposed in this thesis. The model is given by a Kripke structure representing the semantics of an Occam program, and the properties are expressed in a variant of CTL. The results generated by model checking are combined using the LAMBDA theorem prover. The proof system consists of rules for inferring results using an assume-guarantee style of reasoning. The inference rules used are: embedding, modus ponens, conjunction and weakening. Given an Occam program consisting of a number of processes, properties can be proven of each process using the model checker, and the properties combined. An important distinction between the model used in [84] and the model used in this thesis is that in Hungar's framework, each process has its own model — the model for the entire program is the composition of these models. In the compositional theory proposed in this thesis, a model is given for the entire system, and it is not necessary to give a model for the components of the system. Raj an etal. have combined a //-calculus model checker with PVS by using the model checker as a decision procedure for PVS [111]. They demonstrate how such an integrated system can be Chapter 2. Issues in Verification 33 used. Using the ideas of Clarke et al. discussed below, they create an abstraction of a circuit to be verified. Using theorem proving they show that the abstraction has the required properties. Using model checking they show that the abstraction satisfies the specification. In an alternative approach, Dingel and Filkorn verify abstractions of a system using model checking, using certain assumptions about the system environment [49]. Theorem proving is used to prove the correctness of the abstraction and to ensure that the system environment as-sumptions are met. 2.4 Symbolic Trajectory Evaluation This section briefly outlines the existing STE based approach. This is useful in the later discus-sion and will help illustrate some of the novel aspects of the thesis. Symbolic trajectory eval-uation was first proposed in [23] and the full theory can be found in [116]. Good examples of verification using STE can be found in [8,47]. This section is heavily based on the presentation of STE found in [77]. The model of a system is simple and general, a tuple M = ((<S, C ), Y ) , where (<S", Q ) is a complete lattice (<S being the state space and C a partial order on S) and Y is a monotone successor function Y: S —> S. A sequence is a trajectory if and only if Y(<rl) Q al+1 for i > 0. 2.4.1 Trajectory formulas The key to the efficiency of trajectory evaluation is the restricted language that can be used to phrase questions about the model structure. The basic specification language used is very sim-ple, but expressive enough to capture many of the properties we need to check. A predicate over S is a mapping from S to the lattice { F , T } (where F C T ) . 
Informally, a predicate describes a potential state of the system: e.g., a predicate might be (A is x), which says that node A has the value x. A predicate is simple if it is monotone and there is a unique weakest s ∈ S for which p(s) = T.

TF, the set of trajectory formulas, is defined recursively as:

1. Simple predicates: Every simple predicate over S is a trajectory formula. Simple predicates are used to describe simple, instantaneous properties of the model.
2. Conjunction: (F1 ∧ F2) is a trajectory formula if F1 and F2 are trajectory formulas. Conjunction allows the combination of formulas expressing simpler properties into a formula expressing a more complex property.
3. Domain restriction: (e → F) is a trajectory formula if F is a trajectory formula and e is a boolean expression over a set of boolean variables, V. Through the use of boolean variables, a large number of scalar formulas (formulas not containing variables) can be concisely encoded into one symbolic formula.
4. Next time: (N F) is a trajectory formula if F is a trajectory formula. Using the next time operator allows the expression of properties that evolve over time.

An interpretation of variables is a function, φ : V → {F, T}. An interpretation of variables can be extended inductively to be an interpretation of expressions. The truth semantics of a trajectory formula is defined relative to a model structure, a trajectory, and an interpretation, φ. Whether a sequence σ satisfies a formula F (written as σ |= F) is given by the following rules.

1. s0σ |= p iff p(s0) = T.
2. σ |= (F1 ∧ F2) iff σ |= F1 and σ |= F2.
3. σ |= (e → F) iff φ(e) ⇒ (σ |= F), for all interpretations, φ.
4. s0σ |= N F iff σ |= F.

Given a formula F there is a unique defining sequence, δF, which is the weakest sequence that satisfies the formula ('weakest' is defined in terms of the partial order). The defining sequence can usually be computed very efficiently. From δF a unique defining trajectory, τF, can be computed (often efficiently). This is the weakest trajectory which satisfies the formula — all trajectories which satisfy the formula must be greater than it in terms of the partial order.

If the main verification task can be phrased in terms of 'for every trajectory σ that satisfies the trajectory formula A, verify that the trajectory also satisfies the formula C', verification can be carried out by computing the defining trajectory for the formula A and checking that the formula C holds for this trajectory. Such results are called trajectory assertions and we write them as |= [ A ==> C ]. The fundamental result of STE is given below.

Theorem 2.1. Assume A and C are two trajectory formulas. Let τA be the defining trajectory for formula A and let δC be the defining sequence for formula C. Then |= [ A ==> C ] iff δC ⊑ τA. □

A key reason why STE is an efficient verification method is that the cost of performing STE is more dependent on the size of the formula being checked than the size of the system model. STE uses BDDs for manipulation of boolean expressions.
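As an illustration of Theorem 2.1, the scalar check can be written down as a short computation once the defining sequences are known. The sketch below is illustrative only and is not the algorithm of any particular tool: it assumes the state lattice is supplied as a join operation, an ordering test, and a bottom element, that the next-state function Y is monotone, and that the defining sequences of A and C are given explicitly as finite lists of states (all names are hypothetical).

```python
# A minimal sketch of the scalar STE check of Theorem 2.1 (illustrative only).
# `delta_A` and `delta_C` are the defining sequences of antecedent and consequent,
# given as finite lists of states; `Y` is the monotone next-state function;
# `join`, `leq` and `bottom` describe the state lattice.

def defining_trajectory(delta_A, Y, join, bottom, length):
    """tau[0] = delta_A[0]; tau[i+1] = join(delta_A[i+1], Y(tau[i]))."""
    padded = delta_A + [bottom] * (length - len(delta_A))
    tau = [padded[0]]
    for i in range(1, length):
        tau.append(join(padded[i], Y(tau[-1])))
    return tau

def ste_check(delta_A, delta_C, Y, join, leq, bottom):
    """|= [A ==> C] holds iff delta_C lies pointwise below the defining trajectory of A."""
    n = max(len(delta_A), len(delta_C))
    tau_A = defining_trajectory(delta_A, Y, join, bottom, n)
    padded_C = delta_C + [bottom] * (n - len(delta_C))
    return all(leq(c, t) for c, t in zip(padded_C, tau_A))

# Tiny example: a single node whose values are ordered U below L and H (the
# inconsistent value only arises as the join of L and H), and whose next-state
# function simply holds the node's value.
order = {('U', 'U'), ('U', 'L'), ('U', 'H'), ('L', 'L'), ('H', 'H')}
leq = lambda a, b: (a, b) in order
join = lambda a, b: a if leq(b, a) else (b if leq(a, b) else 'Z')  # 'Z' = inconsistent
Y = lambda s: s

# A = "the node is H at time 0"; C = "the node is H at time 1":
assert ste_check(['H'], ['U', 'H'], Y, join, leq, bottom='U')
```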
2.5 Compositional Reasoning

The main problem with model checking is the state explosion problem — the state space grows exponentially with system size. Two methods have some popularity in attacking this problem: compositional methods and abstraction. While they cannot solve the problem in general, they do offer significant improvements in performance. Compositional reasoning is a critical aspect of program verification. The following advantages are stated in [5]:

• Modularity: if a module of a system is replaced, only the module need be verified;
• In design or synthesis it is possible to have undefined parts of a system and still be able to reason about it;
• By decomposing the verification task, verification can be made simpler;
• Re-use of verification results is promoted.

The difficulty of compositional reasoning is that often a particular component may not have a property that we desire of it when placed in a general environment. However, when placed in the context of the rest of the system, it does display the property. See [3] for some discussion of the issues involved in this type of reasoning.

For tableau-based methods, a number of approaches have been suggested. Andersen et al. have proposed a proof system for determining whether processes of a general process algebra [5] satisfy a formula. They show that a set of 39 inference rules is sound, and - for a class of finite-state systems - is complete. Although this is an important contribution, it is difficult to assess the impact of this work without substantive examples. Furthermore, to be practical I believe the proof system needs some form of mechanical assistance.

In related work, Berezine has proposed two model checking algorithms for fragments of the µ-calculus [11] (here model checking asks whether p |= Φ — does the process p satisfy Φ). Both methods can be used to verify problems of the form p × q |= Φ, where p × q represents the composition of processes p and q. The first takes the problem and constructs a formula Φp such that q |= Φp iff p × q |= Φ. The second constructs two formulas Φp and Φq such that p × q |= Φ iff q |= Φp and p |= Φq. As the work is preliminary, it is difficult to assess the applicability and effectiveness of this approach.

Compositional techniques have been proposed for symbolic model checking. Clarke et al. have proposed a method for systems of concurrent processes [40]. To model check P||E |= φ may not be computationally feasible (where P represents a process of interest, and E represents the environment). They show how an interface process, A, can be constructed such that under certain conditions P||E |= φ iff P||A |= φ. The point of this is that the state graph of P||A can be considerably smaller than that of P||E. Although this method has theoretical interest and there are examples of systems for which it works, it has not been established how applicable this method is, and how easy (in terms of human and computation cost) it is to establish the conditions for correct application.

Another approach to compositional reasoning — modular verification — is based on defining a preorder relation, ≼, between models [72, 95]. This preorder is based on a simulation relationship between the models and has the property that if M1 ≼ M2 and M2 |= φ then M1 |= φ. Suppose we wish to show that a process M when placed in its environment satisfies a property φ. While M may not in general satisfy φ, it may satisfy it whenever its environment satisfies another property ψ. Given the formula ψ, there exists a 'tableau' Mψ which is the strongest element in the preorder which satisfies ψ. If E ≼ Mψ and M||Mψ |= φ, then by the property of the preorder, M||E |= φ. The verification therefore includes proving the simulation relation and performing model checking.
Both of these steps are automatic, using symbolic algorithms. This method is only applicable to finite state systems. Aziz et al. propose a compositional method dependent on the formula being checked [6]. The model is represented as a composition of state machines. Given a formula to be checked, an equivalence relation is computed for each machine which preserves the truth of the formula. Using these equivalence relations, quotient machines are constructed and the composition of these machines computed. This composition will have a smaller state space than the original composition and can be used to determined the correctness of the formula. Other compositional approaches exist too. Some of these focus on the question of the re-finement of a specification into an implementation. They tend to use hand proofs. Examples of other approaches include [3, 89, 93]. Chapter 2. Issues in Verification 38 2.6 Abstraction The idea behind abstraction is that instead of verifying property / of model M, we verify prop-erty $A of model MA and the answer we get helps us answer the original problem. The system MA is an abstraction of the system M. One possibility is for the abstraction MA to be equivalent (e.g. bisimilar) to M. This some-times leads to performance advantages if the state space of MA is smaller than M, but usually this type of abstraction is used in model comparison (e.g. as in [74]). Typically, the behaviour of an abstraction is not equivalent to the underlying model. The ab-stractions are conservative in that MA satisfies /A implies that M satisfies / (but not necessarily the converse). Some examples of abstraction methods are [50, 70, 83, 95]. In hardware verification, abstraction is particularly needed in dealing with the data path of circuits. A drawback of abstraction is that it takes effort to both come up with the suitable ab-straction (see [37, 123]) and prove that the abstraction is conservative. For an example of this type of proof see [28]. Clarke etal. define abstractions and approximations [39]. They show how an approximation can be abstracted from the program text without having to construct the model of the system. They provide a number of possible abstractions: congruence modulo an integer (the use of the Chinese remainder theorem); representation by logarithm; single-bit and product abstraction; and symbolic abstraction. They show how this is used on a number of examples. 2.7 Discussion Although equivalence checking is also attractive, this thesis explores one model checking be-cause of its success in verifying large state spaces. Moreover, in some situations it is not appro-priate or possible to have a formal model to compare an implementation against (although work such as [130] offers some ideas in how such a model could be built from a set of properties). Chapter 2. Issues in Verification 39 Theorem provers and model checkers both have strong adherents because both methods have had successes. However, they both have weaknesses. Automatic verification techniques have the advantage of being automated, but have limitations on the size of the systems that they can deal with, and theorem proving methods, while very powerful, are still computationally in-tensive and require a great deal of skill. Work such as [77, 84, 93, 117] among others shows that there is much to be gained from combining the approaches. 
The vision adopted in this research is that symbolic model checking is used to prove low-level properties of the system which would be very tedious for the theorem prover, while the theorem prover — partly automated — is used to prove higher-level properties. Efficient model checking is very important. Although tableau-based methods are powerful and attractive in some situations, BDD-based methods are more appropriate for finite state sys-tems, especially VLSI circuits. Although progress has been made, much work remains to be done to improve performance by examining issues such as abstraction, composition and meth-ods for state and transition relation representation. There are many different criteria for evaluating verification methods, depending on appli-cation and setting (for some discussion of this, see [119]). Three criteria for evaluating the ap-proaches discussed above are: 1. Range of application; 2. Performance; 3. Degree of automation/ease of use. For a fuller discussion of the use of verification methods in industry, see [119]. A problem with this area is that because verification is very difficult, methods tend to be suited for particular applications. Often it is difficult to compare approaches because they solve different problems. The types of properties to be checked for and the way in which the model is represented are critical. For example, verifying multiplier circuitry modelled at the switch Chapter 2. Issues in Verification 40 level where timing is critical is a very different proposition to dealing with a very high level description of an algorithm where timing is not an issue. Furthermore, because many of these problems are so difficult (i.e. are NP-hard) analytic categorisations of different algorithms are not always very useful. Empirical results are also difficult to analyse since verifications are run with systems on different hardware architectures and written in different languages. Particularly difficult to measure is how easy the verification method is to use (how automatic is an automatic verification method) for different classes of user. Many examples in the literature give a few examples but fail to give convincing evidence that the method will work on a larger class of problems. Al l of this is exacerbated by a lack of published empirical results. Work with detailed perfor-mance figures is available (such as [26]) but important theoretical contributions such as [5, 72] come with no performance results and only small examples to illustrate the applicability of the method. The importance of gaining more experimental results has been recognised ([26] is a good ex-ample), and the IFIP working group on hardware verification has recently established a bench-mark suite to help facilitate comparative work [91]. Chapter 7 presents some experimental data in order to evaluate the methods proposed in this thesis. Chapter 3 The Temporal Logic TL This chapter introduces and defines the quaternary temporal logic at the core of the research. Section 3.1 describes the model over which formulas of the logic are interpreted: a complete lattice is used to represent the set of instantaneous states, and a monotonic next state function is used to represent system behaviour. This gives a way of formally describing an implementation of a system such as a VLSI design. Section 3.2 defines Q, a quaternary logic, and proves ele-mentary properties of this logic. Using Q as a base, the quaternary temporal logic TL is defined in Section 3.3. 
The syntax of TL formulas is given; the truth of these formulas is defined with respect to sequences of states of the model. TL gives a way of describing intended behaviour. The primary application of the theory presented in this thesis is for circuit models. For convenience, and as these models have useful properties, it is appropriate to specialise the temporal logic for circuit models. This is discussed in Section 3.4. A critical question is whether the model satisfies the intended behaviour. Section 3.5 is a precursor for the discussion of this question in Chapter 4 by presenting alternative semantics for TL; these semantics illustrate the idea on which the model checking approach of Chapter 4 is based.

3.1 The Model Structure

The model structure ((S, ⊑), R, Y) represents the system under consideration.

• S, a complete lattice under the information ordering ⊑, represents the state space. Let X be the least element in S. (When S = C^n, then X = U^n.)

• R ⊆ S, the set of realisable states, represents those states which correspond to states the system could actually attain — S − R are the 'inconsistent' states, which arise as artifacts of the verification process. Which states are realisable and which are inconsistent is entirely up to the intuition of the modeller; the entire state space could be realisable, or only part of it.

Verification conditions will be of the form: do sequences that satisfy g also satisfy h? Distinguishing unrealisable behaviour from realisable behaviour allows the detection of cases where verification conditions are vacuously satisfied: if no sequence with only realisable states satisfies g, then the verification condition may indeed be satisfied; however, it is likely that either the specification or the implementation is wrong. On the other hand, it may be that for all sequences of realisable states the verification conditions are satisfied, but that some sequences with unrealisable behaviour satisfy g but do not satisfy h. If we consider the set of all sequences, the verification condition will fail; if we consider only the sequences of realisable states, the verification conditions succeed. Thus, the concept of realisability allows the modeller to deal with inconsistent information in a sensible way: detecting vacuous results and ignoring degenerate cases.

There is a technical requirement: R must be downward closed, so that if x ∈ R and y ⊑ x then y ∈ R. This makes computation much easier and has a sound intuitive basis. Intuitively, if a state is not realisable, it is because it is 'inconsistent'; any state above it in the information ordering must be even more 'inconsistent' and thus also not realisable. Conversely, if a state is 'consistent', then a state below it in the information ordering will also be 'consistent'.

• Y : S → S is a monotonic next state function: if s ⊑ t then Y(s) ⊑ Y(t). Although the next state function is inherently deterministic, the partial-order structure of the state space can model non-determinism to some extent. A useful analogy here is that a non-deterministic finite state machine can be modelled by a deterministic one — in the deterministic machine, a state represents a set of states of the non-deterministic machine. In the same way, in our partial-order setting, a state represents all the states above it in the partial order. By embedding a flat (that is, a set without any structure), non-deterministic model in a lattice, the model becomes deterministic. The next state function Y can be thought of as a representation of the next state relation {(s, t) ∈ S × S : Y(s) ⊑ t}. Therefore, although technically we deal with a deterministic system, the deterministic system models non-deterministic behaviour.
Example

For synchronous circuit models, the most important way in which non-determinism is used is to model input non-determinism, that is, the non-deterministic behaviour of inputs of a circuit. One way of modelling the fact that inputs of the circuit are controlled by the environment, not by the circuit, is to have a non-deterministic next state relation. For example, consider the simple inverter circuit of Figure 3.1.

[Figure 3.1: Inverter Circuit]

If the model structure uses a flat state space, the state space and next state relation shown in Figure 3.2 are likely candidates for the model structure. For the next state relation, for each row there is a transition from the state in the first column to each state in the second column.

Figure 3.2: Inverter Model Structure — Flat State Space
(a) State space: {(L,L), (L,H), (H,L), (H,H)}
(b) Next state relation:
    From     To
    (L,L)    (L,H), (H,H)
    (L,H)    (L,H), (H,H)
    (H,L)    (L,L), (H,L)
    (H,H)    (L,L), (H,L)

If a partial order state space is used, one way of constructing the model structure is shown in Figure 3.3. Figure 3.3(a) shows the state space, and Figure 3.3(b) gives the next state function. A c entry in the table means that this row holds for all c ∈ C.

Figure 3.3: Lattice-based Model Structure
(a) State space: the lattice C × C, ordered pointwise
(b) Next state function:
    From     To
    (Z,c)    (U,Z)
    (L,c)    (U,H)
    (H,c)    (U,L)
    (U,c)    (U,U)
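For concreteness, the lattice-based inverter model of Figure 3.3 can be written down directly. The sketch below is illustrative only: it encodes the node lattice C, the pointwise ordering on C × C, and the next state function of Figure 3.3(b), and then checks by enumeration that this function is monotonic.

```python
# The node lattice C = {U, L, H, Z}: U below L and H, and Z on top.
C = ['U', 'L', 'H', 'Z']
node_leq = lambda a, b: a == b or a == 'U' or b == 'Z'

# States are (input, output) pairs, ordered pointwise.
states = [(i, o) for i in C for o in C]
leq = lambda s, t: node_leq(s[0], t[0]) and node_leq(s[1], t[1])

NOT = {'U': 'U', 'L': 'H', 'H': 'L', 'Z': 'Z'}

def Y(state):
    """Next state of the inverter model (Figure 3.3(b)): the input becomes U
    (it is controlled by the environment), the output becomes the negation of
    the previous input."""
    inp, _ = state
    return ('U', NOT[inp])

# Y agrees with the table of Figure 3.3(b) ...
assert Y(('L', 'H')) == ('U', 'H') and Y(('Z', 'L')) == ('U', 'Z')
# ... and is monotonic with respect to the pointwise information ordering.
assert all(leq(Y(s), Y(t)) for s in states for t in states if leq(s, t))
```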
Branching time versus linear time

One of the key issues of temporal logics is whether the logic is linear time or branching time. Since the next state function of the model is deterministic, and since in practice all temporal formulas used are finite, the question of whether the logic used is linear or branching time is rather a fine point. Nevertheless, as trajectory evaluation has been described as a linear time approach [26, p. 403], and as non-determinism can be represented by the model structure, the topic should be discussed briefly.

'Logically the difference between a linear and a branching time operator resides with the possibility of path switching ...' [120]. The model structure proposed here deals with non-determinism by merging paths where necessary. If in the flat model structure there are non-deterministic transitions from state s to states t1, ..., tj, in the lattice model structure there is a state t, such that t ⊑ ti for i = 1, ..., j, and a single deterministic transition from s to t.

Consider the non-deterministic transition diagram shown in Figure 3.4. The difference between linear time and branching time semantics is nicely illustrated here. Suppose an instantaneous property g is true in states s1 and s3 and false in all other states. With a linear time semantics, we can express the property that in all runs of the system, there is a point from which all states in the run have the property g. This cannot be expressed in a branching time semantics: for example, in the run s0.s3.s3.s3..., a branching time semantics always detects the possibility of path-switching and takes into account the potential of a transition from s3 to s2.

[Figure 3.4: Non-deterministic state relation]

Using a lattice structure, instead of using the set S = {s0, ..., s3} as the state space, we use a subset of the power set of S. The state space shown in Figure 3.4 is embedded in the lattice state space shown in Figure 3.5(a) (here, the partial order is shown by dotted lines). The next state relation of Figure 3.4 is replaced with the next state function shown in Figure 3.5(b) (note, only states reachable from s0 are shown in this transition diagram). Note how the two non-deterministic transitions from s0 to s1 and s3 in Figure 3.4 are merged into a single deterministic transition in Figure 3.5(b).

[Figure 3.5: Lattice State Space and Transition Function — (a) the partial order on subsets of {s0, s1, s2, s3}; (b) the transition function]

By using the model structure adopted here, non-deterministic paths that exist in a flat model structure are merged, losing information in the process. It is possible to ask the question whether in all runs of the system property g holds; however, the answer returned will be 'unknown.' So, it would not be accurate to characterise the logic proposed here as either linear time or branching time, since the distinction between the two is blurred. As the expressiveness of the logic and the type of non-determinism used in models is limited compared to many other verification approaches, this question of branching versus linear time semantics is not nearly as important as in other contexts.

In the inverter example above, consider the sequence σ = (L,H)(U,H)... in the partial order model. This represents both of the sequences (L,H)(H,H)... and (L,H)(L,H)... in the flat model structure. Proving a property of σ will take into account the branching structure at each state in the sequence: but it does so in a trivial way by considering (at the same time) both possible values of the input node of the inverter.
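The path-merging construction just described can be made concrete. The sketch below is purely illustrative: the flat relation R is a hypothetical one containing only the transitions mentioned in the text (Figure 3.4 itself may contain others, and the two self-loops are assumptions added only so the relation is total). Lattice states are subsets of the flat state space, ordered by reverse inclusion so that larger sets carry less information.

```python
# Hypothetical flat relation over states s0..s3, containing only transitions
# mentioned in the surrounding text; the self-loops on s1 and s2 are assumed.
R = {
    's0': {'s1', 's3'},   # the two non-deterministic successors of s0
    's1': {'s1'},         # assumed, for illustration only
    's2': {'s2'},         # assumed, for illustration only
    's3': {'s2', 's3'},   # s3 can remain at s3 or move to s2
}

# Information ordering on lattice states: T1 is below T2 exactly when T1 is a
# superset of T2 (more possibilities means less information).
def leq(T1, T2):
    return T2 <= T1

# The derived deterministic next-state function: the union of the flat
# successors of every flat state the lattice state might denote.
def Y(T):
    successors = set()
    for s in T:
        successors |= R[s]
    return frozenset(successors)

# The two transitions out of s0 are merged into a single deterministic one.
assert Y(frozenset({'s0'})) == frozenset({'s1', 's3'})

# Y is monotonic: refining the source state can only refine the successor.
subsets = [frozenset(T) for T in ({'s0'}, {'s3'}, {'s0', 's3'}, {'s1', 's2', 's3'})]
assert all(leq(Y(a), Y(b)) for a in subsets for b in subsets if leq(a, b))
```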
3.2 The Quaternary Logic Q

The four values of Q, the quaternary propositional logic used as the basis of the temporal logic, represent truth, falsity, undefined (or unknown) and overdefined (or inconsistent). Such a logic was proposed by Belnap [10], and has since been elaborated upon and different application areas discussed in a number of other works [59, 125]. This section first gives some mathematical background, based on [58, 113], and then definitions are given and justified.

A bilattice is a set together with two partial orders, ≼ and ≤, such that the set is a complete lattice with respect to both partial orders. A bilattice is distributive if for both partial orders the meet distributes over the join and vice-versa. A bilattice is interlaced if the meets and joins of both partial orders are monotonic with respect to the other partial order. In our application domain, we are interested in the interlaced bilattice

    Q = {⊥, f, t, ⊤}

where the partial orders are shown in Figure 3.6. f and t represent the boolean values false and true, ⊥ represents an unknown value, and ⊤ represents an inconsistent value. B denotes the set {f, t} (so B ⊆ Q).

The partial order ≼ represents an information ordering (on the truth domain), and the partial order ≤ represents a truth ordering. (Note that the ordering ⊑ is used for comparing states and the ordering ≼ is used to compare truth values.) It is very important to emphasise at this point that different lattices are used to represent truth information and state information.

Informally, the information ordering indicates how much information the truth value contains: the minimal element ⊥ contains no truth information; the mutually incommensurable elements f and t contain sufficient information to determine truth exactly; and the maximal element ⊤ contains inconsistent truth information. The truth ordering indicates how true a value is. The minimum element in the ordering is f (without question not true); and the maximum element is t (without question true). The two elements ⊥ and ⊤ are intermediate in the ordering — in the first case, the lack of information places it between f and t, and in the second case, inconsistent information does.

[Figure 3.6: The Bilattice Q]

Formally, the partial orders ≤ and ≼ are relations on Q (i.e., subsets of Q × Q). It is useful to consider the relations as mappings from pairs of elements to a truth domain (if two elements are ordered by the relation we get a true value, if not a false value). Informally, therefore, we can consider the partial orders as mappings from Q × Q to B.

For representing and operating on Q as a set of truth values, there are natural definitions for negation, conjunction and disjunction, namely the weak negation operation of the bilattice and the meet and join of Q with respect to the truth ordering [58]. These definitions are shown in Table 3.1, and have the following pleasant properties, which makes Q suitable for model-checking partially-ordered state spaces.

• The definitions are consistent with the definitions of conjunction, disjunction and negation on boolean values.
• These operations have their natural distributive laws, and also obey De Morgan's laws (so the definition of disjunction was redundant).
• Efficiency of implementation. The quaternary logic is represented by a dual-rail encoding, i.e. a value in Q is represented by a pair of boolean values, where ⊥ = (F,F), f = (F,T), t = (T,F), and ⊤ = (T,T). If a is represented by the pair (a1, a2) and b by the pair (b1, b2), then a ∧ b is represented by the pair (a1 ∧ b1, a2 ∨ b2), a ∨ b by the pair (a1 ∨ b1, a2 ∧ b2), and ¬a = (a2, a1). These operations on Q can be implemented as one or two boolean operations.

Table 3.1: Conjunction, Disjunction and Negation Operators for Q

    ∧ | ⊥ f t ⊤      ∨ | ⊥ f t ⊤      ¬ |
    ⊥ | ⊥ f ⊥ f      ⊥ | ⊥ ⊥ t t      ⊥ | ⊥
    f | f f f f      f | ⊥ f t ⊤      f | t
    t | ⊥ f t ⊤      t | t t t t      t | f
    ⊤ | f f ⊤ ⊤      ⊤ | t ⊤ t ⊤      ⊤ | ⊤

Implication, ⇒, is defined as a derived operator: a ⇒ b = ¬a ∨ b.

There is an intuitive explanation of the dual-rail encoding and the implementation of the operators. If q is encoded by the pair (a, b), a is evidence for the truth of q, and b is evidence against q. To compute q1 ∧ q2, we conjunct the evidence for q1 and q2 and take the disjunction of the evidence against. The computation of q1 ∨ q2 is symmetric. And if a is the evidence for q and b the evidence against q, then b is the evidence for ¬q and a is the evidence against ¬q.
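The dual-rail encoding and the operations of Table 3.1 are small enough to write out directly. The following sketch is an illustration only (the names BOT, F, T and TOP stand for ⊥, f, t and ⊤; Python booleans stand for the rails F and T), implementing the operators and checking a few table entries against the evidence-for/evidence-against reading.

```python
# Dual-rail encoding of Q: a value is a pair (evidence for, evidence against).
BOT, F, T, TOP = (False, False), (False, True), (True, False), (True, True)

def q_and(a, b):
    # Conjoin the evidence for, disjoin the evidence against.
    return (a[0] and b[0], a[1] or b[1])

def q_or(a, b):
    # Disjoin the evidence for, conjoin the evidence against.
    return (a[0] or b[0], a[1] and b[1])

def q_not(a):
    # Evidence for and evidence against swap roles.
    return (a[1], a[0])

def q_implies(a, b):
    # Derived operator: a => b is (not a) or b.
    return q_or(q_not(a), b)

# A few entries of Table 3.1, including the awkward ones discussed next.
assert q_and(TOP, BOT) == F and q_or(TOP, BOT) == T
assert q_and(T, TOP) == TOP and q_not(F) == T and q_not(TOP) == TOP
```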
However nice this intuition, the definition of Q is not without problems. In the context of a temporal logic, it is hard to justify the definition that ⊤ ∧ ⊥ = f. Similarly, since ⊤ ∨ ⊥ = t, if t = q1 ∨ q2 it is not necessarily the case that either q1 or q2 is t. Nevertheless it is the 'classical' definition, and is convenient because the dual-rail encoding is efficient. Other definitions are possible too (for example, defining the operations so that if ⊤ is an operand, the result of the operation must be ⊤ too) and might simplify some of the proofs in later sections and chapters; the particular definition adopted in this thesis is not fundamental.

The following properties of Q are used in subsequent proofs. The first lemma is a consequence of the property that negating a value does not increase the information available.

Lemma 3.1.
1. If q ≠ ¬q, then q ⋠ ¬q.
2. If q1 ≼ q2, then ¬q1 ≼ ¬q2.

Proof.
1. If q ∈ {⊥, ⊤}, then q = ¬q. If q = t, then ¬q = f: t ⋠ f. Similarly, f ⋠ t.
2. (a) If q1 = ⊥, then ¬q1 = ⊥, and ⊥ ≼ q for all q.
   (b) If q1 = t, then ¬q1 = f and q2 ∈ {t, ⊤}. Therefore, ¬q1 ≼ ¬q2.
   (c) Similarly, if q1 = f, ¬q1 ≼ ¬q2.
   (d) If q1 = ⊤, then q1 = ¬q1 = q2 = ¬q2. The result follows by reflexivity of the partial order. □

The second lemma extracts some trivial properties of Q from Table 3.1; these are useful when trying to deduce values of sub-formulas from values of formulas.

Lemma 3.2.
1. If t ≼ q1 ∨ q2, then t ≼ qi for at least one of i = 1, 2.
2. If t = q1 ∧ q2, then t = qi for both of i = 1, 2. If t ≼ q1 ∧ q2, then t ≼ qi for both of i = 1, 2.
3. If f ≼ q1 ∧ q2, then f ≼ qi for at least one of i = 1, 2.
4. If f = q1 ∨ q2, then f = qi for both of i = 1, 2. If f ≼ q1 ∨ q2, then f ≼ qi for both of i = 1, 2.

Proof. Consider Table 3.1.
1. q1 = t or q2 = t; or q1 = ⊥ and q2 = ⊤; or q1 = ⊤ and q2 = ⊥.
2. Only when q1 = q2 = t is q1 ∧ q2 = t. Only for the four bottom, right entries of the table is t ≼ q1 ∧ q2.
3. q1 = f or q2 = f; or q1 = ⊥ and q2 = ⊤; or q1 = ⊤ and q2 = ⊥.
4. Only when q1 = q2 = f is q1 ∨ q2 = f. Only for the four upper, left entries of the table is f ≼ q1 ∨ q2. □
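Since Q has only four elements, case analyses like those in Lemmas 3.1 and 3.2 can also be checked exhaustively by machine. The short script below is illustrative (it re-implements the dual-rail operations rather than relying on any particular tool) and enumerates all cases of parts (1) and (2) of Lemma 3.2.

```python
# Exhaustive check of Lemma 3.2 (1) and (2) over Q, using the dual-rail
# representation: a value is a pair (evidence for, evidence against).
Q = {'bot': (False, False), 'f': (False, True), 't': (True, False), 'top': (True, True)}

q_and = lambda a, b: (a[0] and b[0], a[1] or b[1])
q_or = lambda a, b: (a[0] or b[0], a[1] and b[1])

# Information ordering: q1 is below q2 iff q2 has at least the evidence q1 has.
info_leq = lambda a, b: (not a[0] or b[0]) and (not a[1] or b[1])

t = Q['t']
for q1 in Q.values():
    for q2 in Q.values():
        # Lemma 3.2(1): if t is below q1 v q2, then t is below q1 or below q2.
        if info_leq(t, q_or(q1, q2)):
            assert info_leq(t, q1) or info_leq(t, q2)
        # Lemma 3.2(2): if t is below q1 ^ q2, then t is below both q1 and q2.
        if info_leq(t, q_and(q1, q2)):
            assert info_leq(t, q1) and info_leq(t, q2)
```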
3.3 An Extended Temporal Logic

The propositional logic Q is used as the base for the temporal logic TL. This section first presents the scalar version of TL, the fragment of TL not containing variables, and then presents the symbolic version of TL, which contains variables.

3.3.1 Scalar Version of TL

Given a model structure ((S, ⊑), R, Y), a Q-predicate over S is a function mapping from S to the bilattice Q. A Q-predicate p is monotonic if s ⊑ t implies that p(s) ≼ p(t) (monotonicity is defined with respect to the information ordering of Q). A Q-predicate is a generalised notion of predicate, and to simplify notation, the term 'predicate' is used in the rest of this discussion.

Example 3.1. Take, as an example, the state space S given in Figure 1.1 on page 9. Define g, h : S → Q by:

    g(s) = ⊥ when s = s0;                      h(s) = ⊥ when s ∈ {s0, s2, s6};
           f when s ∈ {s1, s2, s4, s5, s6};            f when s ∈ {s1, s4, s5};
           t when s ∈ {s3, s7, s8};                    t when s ∈ {s3, s7, s8};
           ⊤ when s = s9.                              ⊤ when s = s9.

Figure 3.7 depicts these definitions graphically. g and h are Q-predicates. The same state space and functions will be used in subsequent examples. •

[Figure 3.7: Definition of g and h — the values g(s) and h(s) displayed on the diagram of S]

Note that in the example, s3 is the weakest state for which g(s) = t. In a sense, s3 partially characterises g, and we use this idea as a building block for characterising predicates, motivating the next definition. Given a predicate p, we are interested in the pairs (sq, q) where sq is a weakest state for which p(s) = q.

Definition 3.1. (sq, q) ∈ S × Q is a defining pair for a predicate g if g(sq) = q and ∀s ∈ S, g(s) = q implies that sq ⊑ s. •

In Example 3.1, (s3, t) is a defining pair for g: if g(s) = t then s3 ⊑ s. However, there is no defining pair (sf, f) for g since there is no unique weakest element in S for which g takes on the value f. On the other hand, (s1, f) is a defining pair for h.

Definition 3.2. If g : S → Q then D(g) = {(sq, q) ∈ S × Q : (sq, q) is a defining pair for g} is the defining set of g. •

Using this definition it is easy to compute the defining sets of the functions g and h that were defined in Example 3.1:

    D(g) = {(s0, ⊥), (s3, t), (s9, ⊤)}
    D(h) = {(s0, ⊥), (s1, f), (s3, t), (s9, ⊤)}

If a monotonic predicate has a defining pair for every element in its range, then its defining set uniquely characterises it (see Theorem 3.3 below). Such monotonic predicates are called simple predicates and form the basis of our temporal logic. The following notation is used in the next definition and elsewhere in the thesis: if g : A → B is a function then g(A) = {g(a) : a ∈ A} is the range of g.

Definition 3.3. A monotone predicate g : S → Q is simple if ∀q ∈ g(S), ∃(sq, q) ∈ D(g). •

In Example 3.1, h is simple since every element in the range of h has a defining pair. On the other hand, g is not simple since there is no defining pair (sf, f). Informally, g is not simple since we cannot use a single element of S to characterise the values for which g(s) = f.

Definition 3.4. Some of the important simple predicates are the constant predicates. For each q ∈ Q, the constant predicate Cq(s) = q has defining set D(Cq) = {(X, q)} and so is simple. •

Note that simple predicates need not be surjective; the only requirement is that if q is in the range of a simple predicate, there is a unique weakest element in S for which the predicate attains the value q. A trivial result used a number of times here is that the bottom element of S must be one of the defining values for every predicate: this has the consequence that every element in S is ordered (by being at least as large as) with respect to one of the defining values of each monotonic predicate.

Theorem 3.3. If g, h : S → Q are simple, then D(g) = D(h) implies that ∀s ∈ S, g(s) = h(s).

Proof. See Section A.1. □

This result is used later to show the generality of the definitions.
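When the state lattice is finite, Definitions 3.1-3.3 are easy to compute with. The sketch below is an illustration over the single-node lattice C = {U, L, H, Z} (rather than the state space of Example 3.1, whose ordering is not reproduced here); it computes the defining set of a predicate and tests whether the predicate is simple.

```python
# Computing defining sets (Definition 3.2) and testing simplicity (Definition 3.3)
# over a small finite lattice; illustrated on the single-node lattice C.
S = ['U', 'L', 'H', 'Z']
state_leq = lambda a, b: a == b or a == 'U' or b == 'Z'      # ordering on S
info_leq = lambda a, b: a == b or a == 'bot' or b == 'top'   # information ordering on Q

def defining_set(p):
    """D(p): for each value q in the range of p, the unique weakest state s with
    p(s) = q, when such a state exists."""
    D = set()
    for q in {p(s) for s in S}:
        level = [s for s in S if p(s) == q]
        weakest = [s for s in level if all(state_leq(s, t) for t in level)]
        if len(weakest) == 1:
            D.add((weakest[0], q))
    return D

def is_simple(p):
    """Simple: monotone, and every value in the range has a defining pair."""
    monotone = all(info_leq(p(s), p(t)) for s in S for t in S if state_leq(s, t))
    return monotone and len(defining_set(p)) == len({p(s) for s in S})

# The predicate "the node is high".
is_high = {'U': 'bot', 'L': 'f', 'H': 't', 'Z': 'top'}.get
assert defining_set(is_high) == {('U', 'bot'), ('L', 'f'), ('H', 't'), ('Z', 'top')}
assert is_simple(is_high)
```

Run over the state space of Example 3.1, the same functions would report that h is simple while g is not, since g has no defining pair for the value f.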
Definition 3.5. Let G be the set of simple predicates over S. •

We now use G to construct the temporal logic.

Definition 3.6 (The Scalar Extended Logic — TL). The set of scalar TL formulas is defined by the following abstract syntax:

    TL ::= G | TL ∧ TL | ¬TL | Next TL | TL Until TL    •

The semantics of a formula is given by the satisfaction relation Sat (Sat : S^ω × TL → Q). Given a sequence σ and a TL formula g, Sat returns the degree to which σ satisfies g. Suppose g and h are TL formulas. Informally, if g is simple, a sequence satisfies it if g holds of the initial state of the sequence. Conjunction has a natural definition. A sequence satisfies ¬g if it doesn't satisfy g. A sequence satisfies Next g if the sequence obtained by removing the first element of the sequence satisfies g. A sequence satisfies g Until h if there is a k such that the first k − 1 suffixes of the sequence satisfy g and the k-th suffix satisfies h. (In the special case of g and h being simple, this is equivalent to saying that g is true of the first k − 1 states in the sequence, and h is true of the k-th state.) Note that in the definitions below, the boldface ∧ and ¬ are operations on TL formulas, whereas ∧ and ¬ are operations on Q.

Comment on notation: Sequences are ubiquitous throughout this thesis. There is extensive need to refer to suffixes and individual elements of these sequences. Moreover, individual elements of sequences can be vectors, and on top of this, it is often useful to talk about different sequences. It is plausible to use subscripts to describe all these, but, unfortunately, there is also often a need to refer to these different concepts in close proximity to each other and so there is great opportunity for confusion. To avoid this confusion, a slightly more cumbersome notation is used than might otherwise be desirable. This notation is summarised below.

1. Lower case Greek letters, σ, τ, ... are used to refer to sequences.
2. If σ = s0 s1 s2 ..., then σi denotes si.
3. If σ = s0 s1 s2 ... is a sequence, σ≥i refers to the sequence si si+1 ..., which is a suffix of σ.
4. Superscripts are used to refer to different sequences, e.g. σ¹, σ². Although this conflicts with the usual use of superscript in mathematical text, there is little chance of confusion since 'squaring' states is not defined.
5. If s is a state which is a vector of elements, then s[k] refers to the k-th component of s.

For example, σ³≥i refers to the suffix of the sequence σ³ obtained by removing the first i elements of σ³. (σ³≥i)0[k] = σ³i[k] is the k-th component of the i-th element in the sequence σ³.

Definition 3.7 (Semantics of TL). Let σ = s0 s1 s2 ... ∈ S^ω:

1. If g ∈ G then Sat(σ, g) = g(s0).
2. Sat(σ, g ∧ h) = Sat(σ, g) ∧ Sat(σ, h).
3. Sat(σ, ¬g) = ¬Sat(σ, g).
4. Sat(σ, Next g) = Sat(σ≥1, g).
5. Sat(σ, g Until h) = ∨_{i=0}^{∞} ((∧_{j=0}^{i−1} Sat(σ≥j, g)) ∧ Sat(σ≥i, h)).    •

Note that this is the strong version of the until operator: g need never hold, and h must eventually hold.

The until operator is defined as an infinite disjunction of conjunctions. That this is well defined comes from Q being a complete lattice with respect to the truth ordering. Recall that ∧ is defined as the meet of the truth ordering, and ∨ is defined as the join. Moreover, in a complete lattice, all sets have a meet and join. Therefore each conjunction is well defined, and thus the disjunction of the conjunctions is too. An intuition to support that the definition is well behaved is that the sequence

    a_k = ∨_{i=0}^{k} ((∧_{j=0}^{i−1} Sat(σ≥j, g)) ∧ Sat(σ≥i, h))

is an increasing sequence in Q. As Q is finite and bounded above, the sequence (a_k) has a limit.
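For formulas of finite depth, the satisfaction relation can be evaluated directly from Definition 3.7. The sketch below is a naive recursive evaluator, illustrative only: it represents formulas as nested tuples (a representation invented here, not the thesis's), truncates the Until disjunction at a caller-supplied bound rather than taking the infinite disjunction, and reuses the dual-rail operations on Q; it is not the symbolic algorithm developed later in the thesis.

```python
# Naive evaluator for scalar TL over a sequence of states.
# Q values are dual-rail pairs; formulas are tuples:
#   ('pred', p)  ('and', g, h)  ('not', g)  ('next', g)  ('until', g, h)
BOT, F, T, TOP = (False, False), (False, True), (True, False), (True, True)
q_and = lambda a, b: (a[0] and b[0], a[1] or b[1])
q_or = lambda a, b: (a[0] or b[0], a[1] and b[1])
q_not = lambda a: (a[1], a[0])

def Sat(sigma, g, bound=32):
    """sigma(i) gives the i-th state; `bound` truncates the Until disjunction."""
    tag = g[0]
    if tag == 'pred':                      # simple predicate: look at the first state
        return g[1](sigma(0))
    if tag == 'and':
        return q_and(Sat(sigma, g[1], bound), Sat(sigma, g[2], bound))
    if tag == 'not':
        return q_not(Sat(sigma, g[1], bound))
    if tag == 'next':                      # evaluate on the suffix sigma>=1
        return Sat(lambda i: sigma(i + 1), g[1], bound)
    if tag == 'until':                     # finite approximation of the disjunction
        result = F
        for i in range(bound):
            conj = Sat(lambda k, i=i: sigma(k + i), g[2], bound)
            for j in range(i):
                conj = q_and(conj, Sat(lambda k, j=j: sigma(k + j), g[1], bound))
            result = q_or(result, conj)
        return result
    raise ValueError(tag)

# Example: a one-node model whose node is low at time 0 and high afterwards.
sigma = lambda i: 'L' if i == 0 else 'H'
is_high = ('pred', lambda s: {'U': BOT, 'L': F, 'H': T, 'Z': TOP}[s])
assert Sat(sigma, is_high) == F
assert Sat(sigma, ('next', is_high)) == T
assert Sat(sigma, ('until', ('not', is_high), is_high)) == T
```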
Using these operators we can define other operators as shorthand.

Definition 3.8 (Other operators). Some that we shall use are:

• Disjunction: g ∨ h = ¬((¬g) ∧ (¬h)).
• Implication: g ⇒ h = (¬g) ∨ h.
• Sometime: Exists g = t Until g. (Some suffix of the sequence satisfies g.)
• Always: Global g = ¬(Exists ¬g). (No suffix of the sequence does not satisfy g, hence all must satisfy g.)
• Weak until: g UntilW h = (g Until h) ∨ (Global g). (This doesn't demand that h ever be satisfied.)

Using the operators defined above, other operators can be defined, including bounded versions of Global, Exists, UntilW and Until, and a periodic operator Periodic that can be used to test the state of the system periodically. Other operators — for example, periodic versions of the until operators etc. — are possible too. Two very useful derived operators are the generalised version of Next and the bounded always operator.

• The generalised Next operator is defined by:

    Next^0 g = g
    Next^(k+1) g = Next (Next^k g)

• The bounded always operator, defined by

    Global [(a0, b0), ..., (an, bn)] g = ∧_{j=0}^{n} (∧_{k=aj}^{bj} Next^k g),

  asks whether g holds between aj and bj for j = 0, ..., n. •

If q = Sat(σ, g) then we say that σ satisfies g with truth value q, and if q ≼ Sat(σ, g), then we say that σ satisfies g with truth value at least q.

One of the key properties of the satisfaction relation is that it is monotonic.

Lemma 3.4. The satisfaction relation is monotonic: for all σ¹, σ² ∈ S^ω, if q = Sat(σ¹, g) and σ¹ ⊑ σ², then q ≼ Sat(σ², g).

Proof. If g is simple, this follows since g is monotonic. Since the operators of Q (conjunction, disjunction and negation) are all monotonic with respect to their operands, the monotonicity of TL follows by structural induction. Again, for the until operator this relies on Q being a complete lattice. □

Although the basis of the logic is G, the set of simple predicates, Theorem 3.5 shows that all monotonic predicates can be expressed in TL. If g is a TL formula not containing any temporal operators, then its semantics with respect to a sequence is determined solely by the value of the first element of the sequence. This implies that we can consider such a g to be a predicate from S → Q. Formally, overloading the symbol g, we can define g : S → Q by g(s) = Sat(sXX..., g).

Theorem 3.5. For all monotonic predicates p : S → Q, ∃p' ∈ TL such that ∀s ∈ S, p(s) = p'(s).

Proof. See Section A.1. □

Consider the functions defined in Example 3.1, and let

    h'(s) = ⊥ when s ∈ {s0, s1, s4, s5}; f when s ∈ {s2, s6}; t when s ∈ {s3, s7, s8}; ⊤ when s = s9.

D(h') = {(s0, ⊥), (s2, f), (s3, t), (s9, ⊤)} and so h' is simple. Note that g = h ∧ h'. So, although g is not simple, it can be expressed as the conjunction of two simple predicates.

The depth of a TL formula is a measure of how far into the future it describes the behaviour of sequences; it shows how deeply nested next state operators are. Formally, if g is a TL formula, its depth, d(g), is defined by:

    d(g) = 0 for g ∈ G
    d(g1 ∧ g2) = max{d(g1), d(g2)}
    d(¬g) = d(g)
    d(Next g) = d(g) + 1
    d(g1 Until g2) = ∞
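Since the derived operators of Definition 3.8 and the two operators above are purely syntactic abbreviations, they can be implemented as formula constructors. The sketch below is an illustration using the same tuple representation assumed in the evaluator sketched after Definition 3.7; the function names are hypothetical.

```python
# Derived operators as syntactic expansions over the core connectives
# ('pred', 'and', 'not', 'next', 'until'), following Definition 3.8.
TRUE = ('pred', lambda s: (True, False))   # the constant predicate C_t

def Or(g, h):       return ('not', ('and', ('not', g), ('not', h)))
def Implies(g, h):  return Or(('not', g), h)
def Exists(g):      return ('until', TRUE, g)            # t Until g
def Global(g):      return ('not', Exists(('not', g)))   # no suffix fails g
def UntilW(g, h):   return Or(('until', g, h), Global(g))

def NextN(k, g):
    """Next^k g: k nested applications of Next."""
    for _ in range(k):
        g = ('next', g)
    return g

def BoundedGlobal(intervals, g):
    """Global [(a0,b0),...,(an,bn)] g: g holds at every step k with aj <= k <= bj."""
    conjuncts = [NextN(k, g) for (a, b) in intervals for k in range(a, b + 1)]
    formula = conjuncts[0]
    for c in conjuncts[1:]:
        formula = ('and', formula, c)
    return formula
```

For instance, BoundedGlobal([(1, 3)], g) expands to the conjunction of Next¹ g, Next² g and Next³ g, which the evaluator sketched earlier can check directly.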
Distributivity of Next : Next (51 Aflf2) = (Next$ri)A(Nextflf2), Next ( ^ V 52) = (Next ^ ) V (Next $r2). 6. Identity: g V C f = 0, 0 A C t = 0-7. Double negation: -'-'g = 0 Proo/. See Section A. 1.3. • 3.3.3 Symbolic Version Describing the properties of a system explicitly by a set of scalar formulas of TL would be far too tedious. Symbolic formulas allow a concise representation of a large set of scalar formulas. A symbolic formula represents the set of all possible instantiations of that symbolic formula. TL is extended to symbolic domains by allowing boolean variables to appear in the formu-las. Let V be a set of variable names { u i , . . . , vn}. It would be possible to define the symbolic Chapter 3. The Temporal Logic TL 61 version of the logic by introducing quaternary variables. However, in practice, it is boolean vari-ables which are needed, and introducing only boolean variables means that simpler and more efficient implementations of the logic can be accomplished. Furthermore, the effect of a qua-ternary variable can be created by introducing a pair of boolean variables. Definition 3.10 (The Extended Logic — TL). The syntax of the set of symbolic TL formulas, TL, is defined by:-TL::= G \ V \ TLATL \ ^TL | Next TL | TL U n t i l TL • The derived operators are defined in a similar way to Definition 3.8. For convenience, where there is little chance of confusion, the dots on TL formulas are omitted. The satisfaction relation is now determined by a sequence, a formula, and an interpreta-tion of the variables. An interpretation, cj>, is a mapping from variables to the set of constant predicates {f, t}, Let $ = {</>: 0: V —> {f, t}} be the set of all interpretations. Given an interpretation $ of the variables, there is a natural, inductively defined interpretation of TL for-mulas. For a given c6 £ we extend the definition from V to all of TL by defining: <j>{g) = gifg € G </>(-•#) = -><t>(g) <t>{9i A 92) = <l>(gi) A <j>(g2) c/>(Next g) = Next cf>(g) <f)(gi Unt i lg 2 ) = cf>(gi) U n t i l (j)(g2) This can be expressed syntactically: if 4>(vi) = hi, replace each occurrence of Vi with written as (j>(g) = g[bi/vu ... ,bn/vn}. Chapter 3. The Temporal Logic TL 62 Given a sequence and a symbolic formula, the symbolic satisfaction relations, SATq, deter-mine for which interpretations of variables the sequence satisfies the formula with which de-gree of truth. For example, we may be interested in the interpretations of variables for which a sequence satisfies a formula with truth value t, or the interpretations for which a sequence sat-isfies a formula with truth value at least t. By being able to determine for which interpretations a property holds with a given degree of truth, we are able to construct appropriate verification conditions. The scalar satisfaction relation, Sat, is used in the definition of the symbolic rela-tions. Definition 3.11 (Satisfaction relations for TL). A number of satisfaction relations are defined. • For q = f, t, T , SATq{a,g) = {<j> € $ : q = Sat{a, <f>{g))}. • For q = f, t, T , SAT^g) = {cj> e $ : q =< Sat{a, <j>{g))}. • Note that if g is a (symbolic) formula and 4> an interpretation, then SATq(a,g) C $, while Sat(a,(f)(g)) (E Q. Informally, • SATT^(a, g) is the set of interpretations for which g and ->g hold. Such results are unde-sirable and verification algorithms should detect and flag them. SATTt(a,g) = SATT(a,g). • SATt(cr, g) is the set of interpretations for which g is (sensibly) true. SAT^a,g) = SATT(a,g) U SATt{a, g). 
• SATf(a, g) is the set of mappings for which g is (sensibly) false. SATn{a,g) = SATT(a,g) U SAT{{er,g). Chapter 3. The Temporal Logic TL 63 Thus each satisfaction relation defines a set of interpretations for which a desired relation-ship holds. Sets of interpretations can be represented efficiently using BDDs, as is discussed in Chapter 6. 3.4 Circuit Models as State Spaces In practice, the model-checking algorithms described in this thesis are applied to circuit models. The state space for such a model represents the values which the nodes in the circuit take on, and the next state function can be represented implicitly by symbolic simulation of the circuit. The nodes in a circuit take on high (H) and low (L) voltage values. It is useful, both computationally and mathematically, to allow nodes to take on unknown (U) and inconsistent or over-defined (Z) values. The set C = {U, L, H, Z} forms the lattice defined in Figure 1.2 on page 10. The special case of the state space being a cross-product of quaternary sets need be treated no differently than the general case (when the state space is an arbitrary lattice) as all the above definitions apply. However, it is convenient to establish some additional notation. Let S = Cn for some n. Typically in this case 71 = {U, L, H}" (node values can be unknown or have well-defined values, but cannot be in an inconsistent state). Let Gn be the smallest set with the following predicates:-• The constant predicates: f, t, T e Gn\ • Vi € {1,... ,n}, [i] e G. Here [i] refers to the i-th component of the state space. A formula g is evaluated with respect to a state by substituting for each [i] which appears in the formula the value of the z-th component of the state. Formally, Chapter 3. The Temporal Logic TL 64 when s[i] when when s[i] when Z U L H • f(*) f; t; • -L (s) =±; • T(s) = T; Note that all members of G n are simple and hence monotonic. The definition below of the T L n is based on that of TL, replacing G with Gn. The set of scalar T L n formulas is defined by the following abstract syntax: TL„ ::= Gn | TL„ A TL„ | ->TL„ | Next TL„ | T L n U n t i l T L n The semantics of T L n is patterned on Definition 3.7, replacing G with Gn; this is reproduced below for completeness. Definition 3.12 (Semantics of TL„). The semantics of TL„ formulas is defined by the following: 1. If g G Gn then Sat(a,g) = g(s0); 2. Sat(a, gAh)= Sat(a, g) A Sat(a, h); 3. Sat(a,->g) = -<Sat(a,g); 4. Sat(cr, Nextg) = Sat(a>i,g); 5. SatfagVntilh) = V ((*A Sat(a>j,g)) A Sat(a>i, h))). i=o j=o ~ • Chapter 3. The Temporal Logic TL 65 These definitions are useful because in practice properties of interest are built up from the set of predicates that say things about individual state components. Lemma 3.7 shows that restricting the basis of T L n to Gn is not a real restriction as any simple predicate can be constructed using the operators such as conjunction. Lemma 3.7 (Power of G). If p is a simple predicate over Cn, then there is a predicate gp € T L n such that p = gp. Proof. See Section A. 1. • The combined impact of Theorem 3.5 and Lemma 3.7 is that the logic TL„ is powerful enough to describe all interesting (monotonic) state predicates over Q. The definition of the symbolic version of TL„ is exactly the same as the general definitions (Definitions 3.10 and 3.11), substituting Gn for G. The set of TLn formulas in which T does not syntactically appear is known as the realisable fragment of T L n . 
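As an illustration of the circuit state space and the node predicates [i], the sketch below reuses the Q encoding and the evaluator from the earlier sketch. The string encoding of C, the names NODE_TO_Q and node_join, and the three-node example are assumptions made for illustration; they are not the representation used by the Voss tool discussed in Chapter 6.

```python
# Illustrative sketch of C = {U, L, H, Z} and of the node predicates [i]
# (reusing BOT, F, T, TOP and sat from the earlier sketch).
NODE_TO_Q = {'U': BOT, 'L': F, 'H': T, 'Z': TOP}

def node_pred(i):
    """The simple predicate [i]: bottom when node i is U, f when L, t when H, top when Z."""
    return ('P', lambda state: NODE_TO_Q[state[i]])

def node_join(a, b):
    """Join in the information ordering on C: U below L and H, which are below Z."""
    if a == 'U' or a == b:
        return b
    if b == 'U':
        return a
    return 'Z'              # joining L with H gives the over-defined value

# A two-state prefix of a three-node circuit, and the formula [0] and Next [1]:
prefix = [('H', 'L', 'U'), ('U', 'H', 'U')]
g = ('AND', node_pred(0), ('NEXT', node_pred(1)))
print(sat(prefix, g))       # (1, 0), i.e. t: node 0 is H now and node 1 is H next
```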
If g is a formula in the realisable fragment of T L n , then Sat(a, g) = T only if there exists i,j such that <r,-[j] = Z. Thus, if g is a formula with this restriction, and Z does not appear in a then SAT^a, g) = SATt(a, g). This result is important since we are most interested in the SATt relation. As shown in the next chapter, there is a good decision procedure for the relation SAT^: we check whether SAT^(a,g) = SATt(cr, g), and thereby extend the decision procedure to formulas in the realisable fragment of TL„ to determine the SATt relation too. Other Application Areas Although Q is proposed here as the basis of a temporal logic, it may have other applications in computer science. In a widely quoted and influential logic text, the White Knight says (the quote is taken from an extract dealing with names and reference): Chapter 3. The Temporal Logic TL 66 'It's long,' said the Knight, 'but it's very, very beautiful. Everybody that hears me sing it — either it brings tears into their eyes, or else —' 'Or else it doesn't you know . . . ' — [31] In his commentary on this, Heath says [79]: 'An essentially vacuous claim, since it merely sets forth the logical truism, p or not-p, embodied in the "law of the excluded middle.'" In the light of the preceding discussion, the White Knight's claim, and particularly Heath's critique can be seen to be problematic. While it will be the case that hearing the White Knight sing the song makes everyone cry or not cry, as computer scientists, we are interested in making predic-tions about the behaviour of a system under study. Thus the analyses of the White Knight and Heath are somewhat simplistic, and do not take into account lack of information or inconsistent information which often occur when reasoning about the world. A far more serious instance of the same error can be found in [51] where in The Beryl Coro-net, Sherlock Holmes says: 'It is an old maxim of mine that when you have excluded the im-possible, whatever remains, however improbable, must be the truth.' In this context, Holmes is using logic to reason about a system that inherently has partial and inconsistent information. Our knowledge about such a system must reflect this: the characterisation of propositions about the world into 'impossible' and 'truth' is, as argued earlier, an inadequate logical framework for reasoning. Given the influence of this work on an important branch of logic and deduction, it is important to show the limits of a two-valued logic. And, the notion that simple characterisations may not be appropriate was recognised in work contemporaneous with [51], in an approach that is to be preferred: 'Truth is rarely pure, and never simple' [126]. Chapter 3. The Temporal Logic TL 67 3.5 Alternative Definition of Semantics Although in this chapter the semantics of TL formulas was given through the definition of the satisfaction relations, there are alternative ways in which the semantics could be given.3 A method that is useful to consider here because its underlying motivation leads to an effective verification method defines the semantics by giving for each temporal logic formula the set of sequences that satisfy it. For TL the same pattern could be used, adjusting for the fact that TL is quaternary. Definition 3.13 suggests how this could be done for TL (based on [120, p. 523]). Definition 3.13 (Alternative definition of semantics). ||flf||t = {cr : t = g(so)} if g £ G. \\g\ Ag2\\t = \\gi\\t n ||$r2||t-Ihffllt = IM|f. 0iUntilflr 2||t = U {<? 
£ S' : Vj : 0 < j < i, ayj £ 11fifi111 and a>{ £ ||flf2||t}. 2 = 0 — The definition of ||sr||9 for values of Q other than t is similar. • If this definition were used to give the semantics, then to ask whether a satisfies g with degree q is to ask whether a £ \\g\\q. Similar definitions could be given for satisfaction 'with degree at least q\ Practically speaking, this definition is not useful since these sets are so large that even if only finite subsequences were considered (which is often reasonable to do) the sets would be too large to compute and represent explicitly. However, the partial order representation of the state space is extremely useful. Take as an example simple predicates. If g is a simple predicate, in general, the set of sequences for which Sat(a, g) = t will be too large to compute. However, we have seen that the defining set of g, D(g), essentially captures this information: if, for example, (st, t) and (sj, T) are defining pairs, and if a is an arbitrary sequence, then a £ ||g|| t if st ^ cr0 -< s T . 3The semantics are the same; it is the way that the semantics is given that differs. Chapter 3. The Temporal Logic TL 68 In the same way as the defining sets of a simple predicates characterise the simple predicates, there are analogous structures for other TL formulas that characterise them. And in the same way the defining sets can be considered to give semantics to simple predicates when viewed as TL formulas, these analogous structures give semantics to more complicated TL formulas, and can therefore be used to test satisfaction of sequences. This is the subject of the next chapter. Chapter 4 Symbolic Trajectory Evaluation This chapter develops a model checking algorithm for TL. It is based on the idea raised in the last part of Chapter 3 that formulas of TL can be characterised by the set of sequences or tra-jectories which satisfy them. Initially, only the scalar version of TL is examined. Extension to the symbolic case is straight-forward; however, there is enough extra notation and detail to make an exposition of the scalar case clearer, which overcomes the disadvantage of a little repetition to present the symbolic case. Let the model structure of the system be M = ((<S, C ), TZ, Y). S" is the set of sequences of the state space. The partial order on S is extended point-wise to sequences. Informally, the trajectories are all the possible runs of the system; formally, a trajectory, a, is a sequence com-patible with the next state function: Vi>0 ,Y (<7 , - )E^+i . Let ST be the set of trajectories and, TZT = TZW fl ST is the set of realisable trajectories. TZT represents those trajectories corresponding to real behaviours of a system. TZT(m) = {a0ai... crm-i : a £ TZT] l S the set of prefixes of TZj of length m. Section 4.1 explores the style of verification adopted; this introduces some useful notation and definitions and guides the rest of this discussion. Section 4.2 shows that a formula of TL can be characterised by the sets of minimal trajectories that satisfy it, and furthermore shows that these sets can be used to accomplish verification. The computation of such sets is not directly 69 Chapter 4. Symbolic Trajectory Evaluation 70 possible, but Section 4.3 shows that computing approximations of the sets is feasible (and as later experimental evidence will show, forms a good basis for practical verification). Finally, Section 4.4 generalises the work to the full symbolic logic. 
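The trajectory condition just stated, namely that every step must dominate the value forced by the next-state function, can be checked directly on finite prefixes. The following sketch assumes states are tuples over C in the string encoding used earlier and that Y is a user-supplied monotone next-state function, for instance one obtained by symbolic simulation of a circuit; the function names are illustrative.

```python
# Illustrative check that a finite prefix is consistent with the next-state function Y.
def leq_node(a, b):
    """The information ordering on C: U below everything, everything below Z."""
    return a == 'U' or a == b or b == 'Z'

def leq_info(s, t):
    """The point-wise extension of the ordering to states (tuples of node values)."""
    return all(leq_node(a, b) for a, b in zip(s, t))

def is_trajectory_prefix(prefix, Y):
    """True when Y(prefix[i]) is below prefix[i+1] for every consecutive pair of states."""
    return all(leq_info(Y(s), t) for s, t in zip(prefix, prefix[1:]))
```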
4.1 Verification with Symbolic Trajectory Evaluation The style of verification used in symbolic trajectory evaluation (STE) is to ask questions of the form: Do all trajectories that satisfy g also satisfy hi The formula g is known as the antecedent, and the formula h is known as the consequent. 'Satisfy' is a broad term — there are a number of satisfaction relations that can be used. Which one matches our notion of correctness? There are a number of possible ways of modelling cor-rectness, and the key issue is how to deal with inconsistent information. How correctness is modelled depends on choices the verifier makes — although guided by technical considerations, the verifier has considerable flexibility. There are two obvious ways to formalise the notion of 'trajectory a (successfully) satisfies g'. Relation (4.1) captures a more precise notion — successful satisfaction describes a situation where inconsistency has not caused a predicate to be true of a trajectory. Intuitively, it is a better model of satisfaction than Relation (4.2). However, the latter definition has some advantages: it does capture some useful information; most importantly, as shown later, there is an efficient model checking algorithm using Relation (4.2); and it is often practical to infer the former rela-tion from the latter one. For this reason, we concentrate, for the moment, on the second choice. t = Sat(a,g) (4.1) t ^ Sat(a, g) (4.2) Chapter 4. Symbolic Trajectory Evaluation 71 Corresponding to these two definitions, there are two ways of asserting correctness with respect to a formula. Definition 4.1. g=$>h if and only if Vcr € TZT, t = Sat(cr, g) implies that t = Sat(a, h). and Definition 4.2. g=^>h iff V<T G ST, t ^ Sat(a, <f>(g)) implies that t ^ Sat(a, <j>(h)). The first definition takes a very precise view of realisability. First, we only consider realisable trajectories — if there are unrealisable trajectories with strange behaviour, then these are ig-nored. Moreover, by this definition a sequence satisfying a formula with degree of truth greater than t (i.e. with degree T ) is undesirable. In practice, the model checking algorithm will check in addition that there are some realisable trajectories which satisfy the antecedent (i.e. that the verification assertion is not satisfied vacuously). I submit that this definition best captures the notion of correctness. The second definition takes a more relaxed view of inconsistent behaviour. We consider the behaviour of all trajectories, whether realisable or not, and treat the truth values t and T as sat-isfying the notion of correctness. Although, this definition is not as good a model of correctness as the first, it has the advantage that there is an efficient verification method for it. Therefore, for pragmatic reasons, we concentrate at first on Definition 4.2, which will be central in the development of the next three sections. These sections show how an efficient verification methodology for correctness assertions based on this definition can be developed. The last part of Section 4.4 shows that for circuit models, this methodology can be used to infer correctness assertions based on Definition 4.1. Chapter 4. Symbolic Trajectory Evaluation 72 4.2 Minimal Sequences and Verification This section first formalises the notion of the sets of minimal trajectories satisfying formulas, and then shows how these sets can be used in verification. 
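Before doing so, the two correctness relations just defined can be made concrete with a brute-force sketch. It is for intuition only: STE never enumerates trajectories, the collection passed in is an explicit finite set of finite prefixes (so the distinction between realisable trajectories and all trajectories is left to the caller), and the evaluator and Q encoding come from the earlier sketches.

```python
# Brute-force illustration of the two correctness relations (not how STE works).
def at_least_t(q):
    return q in (T, TOP)    # truth value t or top, i.e. at least t in the truth ordering

def leads_to_strict(trajectories, g, h):
    """In the spirit of Definition 4.1: wherever g holds with value exactly t,
       h must also hold with value exactly t."""
    return all(sat(s, h) == T for s in trajectories if sat(s, g) == T)

def leads_to_relaxed(trajectories, g, h):
    """In the spirit of Definition 4.2: wherever g holds with value at least t,
       h must hold with value at least t."""
    return all(at_least_t(sat(s, h))
               for s in trajectories if at_least_t(sat(s, g)))
```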
The first definition is an auxiliary one: given a subset of a partially-ordered set, it is useful to be able to determine the minimal elements of the set. If B is a subset of A, then b £ B is a minimal element of B if no other element in B is smaller than b (i.e. all elements of A smaller than b do not lie in B). Definition 4.3. If A is a set, B C A, and C a partial order on A, then mini? = {b £ B : if 3a £ A 9 a C b, either a = 6 or a £}. • Definition 4.4. If 5 is a (scalar) TL formula, then min g is the set of minimal trajectories satisfying g, where ming is defined by: ming = min{cr £ ST • t ^ Sat(cr, g)} • Note that if min g C min /i, then every trajectory that satisfies g also satisfies /i. For suppose a satisfies g: then there must exist a' £ ming such that t •< Sat(a',g) and a' • a; but since ming C minh, a' £ min/i and hence t •< Sat(a', h); hence by monotonicity t -< Sat(a, h). This gives some indication that manipulating and comparing the sets of minimal trajectories that satisfy formulas can be useful in verification. Although we will be comparing sets of sequences, containment is too restrictive, motivating a more general method of set comparison. The statement 'every trajectory that satisfies g also satisfies k" implies that the requirements for g to hold are stricter than the requirements for h to hold. Thus, if a is a minimal trajectory satisfying g, a must satisfy h. Since the requirements Chapter 4. Symbolic Trajectory Evaluation 73 for g are stricter than the requirements on h, a need not be a minimal trajectory satisfying h, but there must be a minimal trajectory, a', satisfying h where a' C a. This is the intuition behind the following definition, which defines a relation over V(S), the power set of S. Definition 4.5. If S is a lattice with partial order C and A, B C <S, then A C ? B i f V & G B ,3aG A such that a C 6. • To illustrate this definition, consider the example of Figure 4.1. Assume A and B are subsets of some partially ordered set, S. Note that in this example that both A and B are upward closed. Although the definitions given here do not require this, we will be dealing with upward closed sets.1 Figure 4.1(a) depicts A. Let Am = mm A = {a, 8,7, (} be the set of minimal elements of A. Then A consists of all the elements above the dotted line. Similarly, Figure 4.1(b), de-picts B. Let Bm = min B = {n, 7} be the set of minimal elements of B. Figure 4.1(c) is the superposition of Figures 4.1(a) and (b). Note that Am C p Bm. For each element of Bm there is an element of Am less than or equal to it: a C 77 and 7 C 7 . Suppose A is the set of elements with property g, and that B is the set of elements with property h. Then m'mg = Am and m'mh = Bm. By examining the figure it is easy to see that all elements of S that have property h also have property g (h implies g). But, note that m'mh % minHowever, it is the case that ming min/i, which motivates exploring the C p relation further. JWe will be manipulating sets of trajectories and sequences that satisfy formulas; that these are upward closed follows from the monotonicity of the satisfaction relation (Lemma 3.4). Chapter 4. Symbolic Trajectory Evaluation 74 A B / /5 7 £ 7 7 C a a ( b ) S (c) A and 5 Figure 4.1: The Preorder Lemma 4.1. If 5 is a lattice with partial order • , then is a preorder (i.e., it is reflexive and transitive). Proof. Reflexivity follows directly from the reflexivity of C . Suppose that A\ZV B and B \Zj> C, and let c 6 C. 
(1) 3 6 e # 9 & C c £ Q p C : Vce C, 3b £ B 9 & C c (2) 3a e A3 aH b A\ZV B: Mb G B, 3a € A 9 a C 6 (3) a C c C is transitive (4) AQ-p C Since c was arbitrary. • Note that if 5 C A, then AQ? B. The following theorem shows the importance of the defini-tion of C-p . Theorem 4.2. If g and /t are TL formulas, then g=g>/j. if and only if min h C p ming. Chapter 4. Symbolic Trajectory Evaluation 75 Proof. By the definition of minimal sets, if t ^ Sat(a, g), there exists a' £ m i n g with a' C cr. g=^>h <^ => Vo- £ ST, t ^ Sar(cr, g) implies that t -< Sat(a, h) Vcr' £ m i n g implies that t -< Sat(a', h) •<=>• V a ' £ m i n g implies 3cr" £ m i n / i , with a" C cr •<=>• m i n / i C p m i n g • Although computing the minimal sets directly is often not practical, it is possible to find ap-proximations of the minimal sets (they are approximations because they may contain some re-dundant sequences). The next section shows how to construct two types of approximations to the minimal sets. A f c(/z) is an approximation of the set of minimal sequences that satisfy h, and ^(g) is an approximation of m i n g . The importance of these approximations are that (i) A*(/i) C p r t(^r) exactly when g^^>h (an analogue of Theorem 4.2), and (ii) there is an effi-cient method for computing these approximations, which we now turn to. 4.3 Scalar Trajectory Evaluation The method of computing the approximations to the minimal sets of formulas is based on sym-bolic trajectory evaluation (STE), a model checking algorithm for checking partially-ordered state spaces. The original version of STE was first presented in [25] and a full description of STE can be found in [116]. In these presentations, the algorithm is applied only to trajectory formulas, a restricted, two-valued temporal logic. This chapter generalises earlier work in two important respects. 1. It presents the theory for applying STE to the quaternary logic. 2. It presents the theory for the full class of TL. In particular it deals with disjunction and negation. Chapter 4. Symbolic Trajectory Evaluation 76 This section examines the scalar version of TL and shows how given a TL formula, a set of sequences characterising the formula can be constructed. Recall the definition of defining pair and defining set from Section 3.3.1. The defining set of a simple predicate characterises that predicate; this can be used as a building block to find a characterisation of all temporal predi-cates. By using the partial order representation, an approximation of the minimal sequences that satisfy a formula can be used to characterise a formula. These sets are called defining sequence sets. Practical experience with verification using STE has shown that there are many formulas that have small defining sequence sets. This section shows how to construct defining sequence sets using the defining pairs of sim-ple predicates as the starting point. The defining sequence sets of a formula are a pair of sets where the first set of the pair contains those sequences, a, for which t ^ Sat(a, g), and the sec-ond set contains those sequences for which f •< Sat(a, g). These sets are constructed using the syntactic structure of TL formulas. If a formula is simple its defining sequence sets are con-structed directly from the defining set of the formula. For compound formulas, these sets are constructed by performing set manipulation described below. As manipulating sets of sequences is very important, first we build up some notation for manipulating and referring to such sets. 
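The order-theoretic ingredients already introduced, the minimal elements of Definition 4.3 and the preorder of Definition 4.5, can be computed directly for explicit finite sets, as the sketch below shows. The function names and the representation of sets as Python lists are assumptions made for illustration.

```python
# Illustrative versions of min (Definition 4.3) and the preorder on sets (Definition 4.5).
def min_elements(B, leq):
    """The elements of B with no strictly smaller element in B."""
    return [b for b in B if not any(leq(a, b) and a != b for a in B)]

def leq_P(A, B, leq):
    """A below B in the preorder: every element of B dominates some element of A."""
    return all(any(leq(a, b) for a in A) for b in B)
```

With leq instantiated to the point-wise ordering on sequences, leq_P(min_h, min_g, leq) is exactly the comparison required by Theorem 4.2.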
Definition 4.6 (Notation). If A and B are subsets of a lattice C on which a partial order C is defined, then A II B = {a U b : a £ A,b G B}. If g: C —> C, g(A) continues to represent the range of g, and similarly, g((A, B)) = (g(A),g(B)}. • Note that we write AII B rather than A U B since although AII B is a least upper bound (with respect to C p ) of A and B it is not the least upper-bound (this reflects the fact that C p is a preorder not a partial order). Chapter 4. Symbolic Trajectory Evaluation 77 The two fundamental operations used are join and union, and it is worth discussing how they are used. First, if we know how to characterise sequences that satisfy gi and those that satisfy g2, how do we characterise sequences that satisfy gi A g2l Let q £ Q and suppose that cr1 and cr2 are the weakest sequences such that q -< Sat(al,gi). Let a3 = a1 U a2. Clearly, q •< Sat(aJ,gi A g2). Moreover, supposeg -< Sat(a',gi A g2), then it must be that q •< Sat(a',gi) and q •< Sat(a', g2). Thus a1 Qa' and a2 • a' since the a1 are the weakest sequences such that q -< Sat(a\gi). But, since a3 = cr1 U cr2, a3 C cr'. Thus crJ is the weakest sequence satisfying 01 Ac/2. What about characterising sequences that satisfy giVg2l At first it may seem that this is analogous, and we should just use meet instead of join. However, this is not symmetric: since we are characterising a predicate by the weakest sequences that satisfy it, taking the meets will lose information. While it will be the case that if q -< Sat(cr',gi V g2) then a1 n cr2 X cr', the converse does not hold in general. This means that to characterise gi V g2 we need to use both cr1 and cr2. Since the law of the excluded middle does not hold in the quaternary logic, we need to char-acterise both the sequences that satisfy a predicate with value at least t and those that satisfy a predicate with value at least f . Definition 4.7 (Defining sequence set). Let# £ TL. Define the defining sequence sets of g as A(g) = (At(g), A f (g)), where the Aq(g) are defined recursively by: l.If g is simple, Aq(g) - {sXX... : (s, q) 6 Dg, or (s, T) £ Dg}. This says that provided a sequence has as its first element a value at least as big as s then it will satisfy g with truth value at least q. Note that A'(^) could be empty. 2.AG/1 V g2) = (A'Ofc) U A}(g2), A f (9l) 11 A f (g2)} Chapter 4. Symbolic Trajectory Evaluation 78 Informally, if a sequence satisfies gV h with a truth value at least t then it must satisfy either g or h with truth value at least t. Similarly if it satisfies g V h with a truth value at least f then it must satisfy both g and h with a truth value at least f. This is motivated by the fact that for q = f, t, a satisfies g with truth value at least q if and only if it satisfies ->g with truth value at least ->q. 5. A(Next g) — shift A(g), where shift(s0Si ...) = Xs0s\ ... s0sis2... satisfies Next g with truth value at least q if and only if sis2 . . . satisfies g with at least value q. 6. A( g i Untilg 2) = (Ak(gi Until# 2), A ^ Until£r 2)), where •A*(>i Untilg 2) = U g 0 ( A t ( N e x t V ) I I . . . II A ^ N e x t ^ 1 ) ^ ) II A^Next 8 '^)) • A f ( c / i U n t i l e ) = n~ 0 (A f (Next°^ 1 ) U . . . U A f (Next^- 1 )^) U A f (Next^ 2 )) Recall that Nextkg = g if k = 0 and Nextkg = Next Next^ - 1^ otherwise. Here we consider the until operator as a series of disjunctions and conjunctions and apply the mo-tivation above when constructing the defining sequence sets. 3.A(<7! 
Ag2) = (A'CsO II At(g2),Af(g1) U Af(</2)) This case is symmetric to the preceding one. 4 .A(^) = (A f ( 5 r) ,A t (9)) Note that it may be that 81, S2 € Aq(g) where 81 C S2. As a practical matter it would be prefer-able for only 51 to be a member of Aq(g). However, this redundancy does not affect what is presented below. Chapter 4. Symbolic Trajectory Evaluation 79 An important consequence of Definition 4.7 is that for each formula g of TL, A(g) charac-terises g: all sequences that satisfy g must be greater than one of the sequences in At(g). The lemma below formalises this (the proof is in Section A.2). Lemma 4.3. Let g e TL, and let a G <SW. For q = t, f, q X Sat{a, g) iff 3S9 £ A % ) with S9 C a. • 4.3.1 Examples The constant predicates have very simple denning sequence sets. A(t) = ({XX..},0) A ( 1 ) = (0,0) A(f) = (0, {XX... }) A (T) = ({XX... }, {XX... }} Every sequence satisfies the predicate t with truth value t, and no sequence satisfies the pred-icate t with truth value f or T . Similarly, no sequence satisfies f with truth value at least t, while all sequences satisfy f with truth value f. Note that A(t) = A(-if) (indeed, it would be disconcerting if this were not the case). Example 4.1. Suppose that A(g) = ( A g , B g ) and A(h) = ( A h , B h ) . Then, A(g V h) = A ( i ( - > # A - i / i ) ) . To facilitate the proof, let rev{A, B) - {B, A ) . A(-i(->g A ~~~>h)) — rev A{->g A ->h) = rev(A\-^g) II A\->h), Af(-*g) U Af(->h)) = rev(At(g)UAr(h),At(g)UAt(h)) = (At(g)UAt(h),Af(g)UAf(h)) = A(gVh) Chapter 4. Symbolic Trajectory Evaluation 80 Example 4.2. x = y is short for (a; A y) V ((-"a;) A(->y)). A(x = y) = (A*(xAy V (-xA-y)), A f (xAy V ( - . I A - " ! / ) ) ) = (A k (xAy)U A*((-.xA-.y)), A f (xAy) II A f(-.xA--y)) = ((A'(x) H Ak(y)) U (A'(-.x) II A ' ^ y ) ) , (A f (x) U A f(y)) H (A f (-.x) U A f (-.y))> = ((A t (x)nA t (y))U(A f (x)nA f (y)) , (A f (x )UA f (y ) )n(A t (x )UA t (y))) • Example 4.3. Let <S* = C 3 ; this models a circuit with three state holding components. The formula g = (([1] V [2]) = f) asks whether it is true that neither component 1 nor component 2 has the H value. A(f) = <0 ,{(U,U,U).. .}> A([1]) = ( { ( H , U , U ) ( U , U , U ) . . . } , { ( L , U , U ) ( U , U , U ) . . . » A([2]) = ( { ( U , H , U ) ( U , U , U ) . . . } , { ( U , L , U ) ( U , U , U ) . . . » A([1]V[2])= ({(H,U,U)(U,U,U)... ,(U,H,U)(U,U,U)...}, {(L,L,U)(U,U,U)...}} A 4([l] V [2]) = f) = A k([l] V [2]) II 0 U A f([l] V [2]) II {(U, U, U). . . } = A f([l] V [2]) = {(L,L,U)(U,U,U)...} Chapter 4. Symbolic Trajectory Evaluation 81 A f([1] V [2]) = f) = (Af([1] V [2]) U {(U, U, U) . . . }) II (A'([l] V [2]) U 0) f ([ l [ ) , , }  '([1]V.[2]. ) = ( { ( L , L , U ) ( U , U , U ) . . . } U { ( U , U , U ) . . . } ) II {(H,U,U)(U,U,U).. . , (U,H,U)(U,U,U).. .} = { ( L , L , U ) ( U , U , U ) . . . , ( U , U , U ) . . . } n { ( H , U , U ) ( U , U , U ) . . . , ( U , H , U ) ( U , U , U ) . . . } = { ( Z , L , U ) ( U , U , U ) . . . , ( L , Z , U ) ( U , U , U ) . . . , (H,U,U)(U,U,U).. . , (U,H,U)(U,U,U).. .} Note that A'([l] V [2]) \ZV (A f ([1] V [2]) U {(U, U, U) . . . }) II (A*([l] V [2])) showing the 4.3.2 Defining Trajectory Sets The defining sequence sets contain the set of the minimal sequences that satisfy the formula. It is possible to find the analogous structures for trajectories — we can find an approximation of the set of minimal trajectories that satisfy a formula. This section first shows how, given an arbitrary sequence, to find the weakest trajectory larger than it. 
Using this, the defining trajec-tory sets of a formula are defined. Finally, Theorem 4.5 is presented, which provides the basis for using defining sequence sets and defining trajectory sets to accomplish verification based on Definition 4.2. Definition 4.8. Let G = s0sis2 Let r(<r) = t0tit2... where: redundancy in defining sequence sets. • Y(ij_i) U Si otherwise when i = 0 Chapter 4. Symbolic Trajectory Evaluation 82 t0tit2... is the smallest trajectory larger than o. s0 is a possible starting point of a trajectory, so t0 = so- Any run of the machine that starts in s0 must be in a state at least as large as Y(s 0 ) after one time unit. So tx must be the smallest state larger than both sx and Y(s 0 ) . By definition of join, ti = Y(s 0 ) U Si = Y ( t 0 ) U si. This can be generalised to U = Y(U-i) U s,-. • In the same way that there is a set of minimal sequences that satisfy a formula, there is a set of minimal trajectories that satisfy a formula. A set that contains this set of minimal trajectories can be computed from the defining sequence sets. The defining trajectory sets are computed by finding for each sequence in the defining sequence sets the smallest trajectory bigger than the sequence. Definition 4.9 (Defining trajectory set). T(g) = (T^g), Tf(g)), where T«(g) = {r(a) : a £ A%)}. • Note that by construction, if T9 £ Tq(g) then there is a 59 £ Aq(g) with Sg C T9. T(g) char-acterises g by characterising the trajectories that satisfy g. This is formalised in the following lemma which is proved in Section A.2. Lemma 4.4. Let g £ T L , and let a be a trajectory. For q = t,f,q^ Sat(a,g) if and only if 3T9 £ Tq(g) with r9 Qa. • The existence of defining sequence sets and defining trajectory sets provides a potentially effi-cient method for verification of assertions such as g=^>h. The formula g, the antecedent, can be used to describe initial conditions or 'input' to the system. The consequent, h, describes the 'output'. This method is particularly efficient when the cardinalities of the defining sets are small. This verification approach is formalised in Theorem 4.5 (which is proved in Section A.2). Section 4.1 showed how this result is used in practice. Recall that these antecedent, consequent Chapter 4. Symbolic Trajectory Evaluation 83 pairs are called assertions. Theorem 4.5. If g and h are TL formulas, then A}(h) Q-p T*(<7) if and only if g=^>h. • Some formulas have small defining sequence sets with simple structure. Definition 4.10. If g £ TL, and 389 G A\g) such that VcT £ A*(flf), S9 C S, then 89 is known as the defining sequence of g. If the 59 is the defining sequence of g, then T9 = T(59) is known as the defining Finite formulas with defining sequences are known as trajectory formulas. Seger and Bryant characterised these syntactically (see Section 2.4). Two useful special cases of Theorem 4.5 should be noted. First, if A is a formula of TL with a well-defined defining sequence 5A, and h G TL, then G A k (/ i ) , 5 C rA if and only if, for every trajectory a for which t •< Sat(a, A) it is the case that t X Sat(cr, h). Second, let A and C be formulas of TL with well-defined defining sequences 5A and 5C. Then 8C C rA if and only if, for every trajectory a for which q •< Sat(cr, A) it is the case that q X Sat(a, C). This is essentially the result of Seger and Bryant generalised to the four valued logic. 4.4 Symbolic Trajectory Evaluation The results of Section 4.3 can easily be generalised to the symbolic version of TL. 
The con-structs used in the previous section such as defining set and so on all have symbolic extensions. Each symbolic TL formula is a concise encoding of a number of scalar formulas; each inter-pretation of the variables yields a (possibly) different scalar formula. To extend the theory of trajectory of g. • Chapter 4. Symbolic Trajectory Evaluation 84 trajectory evaluation, symbolic sets are introduced; these can be considered as concise encod-ings of a number of scalar sets. Symbolic sets can be manipulated in an analogous way to scalar sets. Using this approach, the key results presented above extend to the symbolic case. This section first presents some preliminary mathematical definitions and generalisations and then presents symbolic trajectory evaluation. The verification conditions are extended to the symbolic case. Given two symbolic formulas g and h we are interested in for which interpretations, (f>, it is the case that for all trajectories, a, if u satisfies g, then a also satisfies h. Again, both the =g> and ==t> relations are considered. 4.4.1 Preliminaries Definition 4.11. \g=Z>h)= {(f> e Vcr <E 1ZT,t = Sat(a,(f>(g)) implies that t = Sat(a,(f)(h))}. • Ideally such verification assertions should hold for all interpretations of variables. Definition 4.12. (= <] g=t>h [> denotes (\g=s>h [>= $. • Note that |= <] g=S>h \ if and only if Vcr £ 7lT,SATt(a, g) C SATt(a,h). An alternative approach is to treat inconsistency more robustly (which is what happens in STE defined on a two-valued logic). We could use these definitions. Definition 4.13. <] g=^>h [> = {(f) € $ : Vcr e Sr, t ^ Sat(a, <f>(g)) implies that t ^ Sat(a, (f>(h))} • and Chapter 4. Symbolic Trajectory Evaluation 85 Definition 4.14. |= <] g=$>h [> denotes <] g==$>h $. • Note that |= <] g=i>h ) if and only if Va £ <Sr, SAT^a, g) C SAT^ (a, /*)). Symbolic sets are now introduced. Definition 4.15. Define £, the boolean subset of TL, by £ ::=t | f | V | £ A £ | - 5 This definition is used in this chapter and Chapter 5. • Recall that an interpretation of variables can be considered as a function mapping a symbolic TL formula to a scalar one. In particular, if <j> is an interpretation of variables, and a 6 £, 4>(a) £ {f,t}. In what follows, let S be a lattice over a partial order C ; this induces a lattice structure on <SW; in turn, C p is the induced preorder on V(SW) defined earlier. Definition 4.16. A symbolic set over a domain 7^(5^) is one of 1. A £ ViS"); 2. a -» A where a £ £; 3. A i U A 2 , where A i , A 2 are symbolic sets; 4. A i h A 2 , where A i , A 2 are symbolic sets; or 5. AiIIA 2 , where A x , A 2 are symbolic sets. • Chapter 4. Symbolic Trajectory Evaluation 86 Each symbolic set represents a number of sets. Each interpretation of variables, yields a scalar set contained in V(SW). Given an interpretation of variables, there is a natural interpre-tation of symbolic sets, given below. Definition 4.17. Let 0 G $ be given. 1. <j>{A) = A for all A G V(SW); 2. <f>(a -> A) = {x G A : 0(a) = t}. Thus if a evaluates to t, then a -> A is the set A, otherwise it is the empty set. 3. A JI B is defined by 0(A L\B) = 0(A) II <f>(B). 4. I f / : V(SUJ)m -» 7?(<Su;), then the symbolic version of / is defined by 0 ( / ( A 1 , . . . , A n ) ) = /(cA(A 1 ) , . . . ,0(A r i )) . These definitions can be used in extending set operations such as set union, as well as for more general functions, for example in extending Definition 4.9 to give a definition of symbolic defining trajectory sets. 5. A tv B = {(j) G $ : 0(A) nv <f>(B)}. 
• The following lemma shows that these definitions are sensible. Lemma 4.6. Let A, B, C be symbolic sets over domain V(SW). 1. (AtAlIB) = $. 2. If (A C C) = $ and ( £ t C) = $, then (ARB E C ) = $. Chapter 4. Symbolic Trajectory Evaluation 87 Proof. Let 0 be arbitrary. (1) 0(A) C 0(A) II 0(B) (2) 4(A) n <f>(ATl B) Property of II. Definition. Since 0 is arbitrary part 1 follows. • (3) 0(A), 0(B) C 0(C) (4) 0(A) II 0(B) • 0(C) (5) 0(ALIB) C0(C) From (3), by property of join. Hypothesis 2. Definition. Since 0 is arbitrary, part 2 follows. 4.4.2 Symbolic Defining Sequence Sets Given this mathematical machinery, symbolic defining sequence sets can now be defined. The definition of defining sequence sets (Definition 4.7) must be extended by using the symbolic versions of set union and join etc. In addition, one more part must be added to the definition to take into account formulas of TL containing variables. For completeness, this definition is given below. Definition 4.18 (Extension to Definition 4.7). Let g € TL. Define the symbolic defining sequence sets of g as A(g) = (At(fif), A f (g)), where the Ag(g) are defined recursively by: 1. If g is simple, A"(g) = {sXX ... :(s,q)e Db, or (s, T ) £ Dg}. 2. A(gi V g2) = (A'teOuA'fo), Af(&)nAf(fli)) 3. A(gi Ag2) = (A t^i)liAt(0 2),A f^ 1)UA f^ 2)) 4. A(^ ) = (A,(flf),At(^)> 5. A(Next g) = shiftA(g) 6. A(gi Until5f2) = (A^ flf! Un t i l g2),Af(gi Un t i l g2)), where Chapter 4. Symbolic Trajectory Evaluation 88 • A^firi Unt i lg 2 ) = U^0(A t(Next0^1)LI.. . IlA^Next^-^UA^Next^)) • A f(^i Unt i l $r2) = H^0(Af(Next°<7i)U • • • uAf (Next( i _ 1)$fi)UA f(UexVg 2)) 7. Ifu € V, A(v) = {v-+ { X X . . . },--«'-)• {XX. . . } ) . Ifcf>(v) = t,then 0(A(u)) = A(t), and if <f>(v) = f, then </>(A(u)) = A(f). The extension of the definition of defining trajectory set is straightforward. Definition 4.19 (Symbolic denning trajectory set). f<(g) = i(A(g)). The main result of symbolic trajectory evaluation is based on Theorem 4.5. It says that the set of interpretations t\ g=^>h \ (those interpretations, 4>, such that <f>(g)=^xf>(h). is exactly the same as the set of interpretations for which At(h) C.T T*(<7). So, if we can compute one, then we can compute the other. Theorem 4.7. Let g, h be TL formulas. <] g=^h |>= (Ak(A) t v t^g)) Proof. (J g=$>h |> = {4 € $ : Vcr e ST, t •< Sat(cr, (b(g)) implies that t -< Sat(a, (f)(h))} = {(f) G $ : A*(<£(fc)) Cp T\(f){g))} (Theorem 4.5) = A* (A) tv f\g) (By definition.) This is an important result, because efficient methods of computing these symbolic sets and per-forming the trajectory evaluation exist, and have been implemented in the Voss tool discussed in Chapter 6. This forms the basis of the verification presented here. Chapter 4. Symbolic Trajectory Evaluation 89 4.4.3 Circuit Models When S = CN and TZ = {U, L, H}71, and only the realisable fragment of TL„ (those T L n for-mulas not syntactically containing T ) are considered, computing these verification results is simplified. In the rest of this section we only consider the realisable fragment of TL„. Two important properties of the realisable fragment of TLN are given in the next lemma. Lemma 4.8. 1. cr G TZT if and only if Z does not appear in cr (for all ^ Z ) . 2. If cr e TZT and g is in the realisable fragment of TL„, SATT(a,g) = 0 and SATt(a,g) = SAT^g) Proof. The proof of (1) comes from the definition of TZ. For (2), recall from Section 3.4 that Sat(cr, g) = T only if g contains a subformula g' e Gn which is either the constant predicate T or if Z appears in a. 
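Definition 4.8 (the weakest trajectory above a sequence) and the realisability check of Lemma 4.8(1) are easily sketched for finite prefixes. The sketch reuses node_join from the earlier sketch and assumes states are tuples over C and that Y is the next-state function; the names are illustrative.

```python
# Illustrative versions of Definition 4.8 and the realisability test of Lemma 4.8(1).
def state_join(s, t):
    """Point-wise join of two states (reusing node_join from the earlier sketch)."""
    return tuple(node_join(a, b) for a, b in zip(s, t))

def tau(prefix, Y):
    """The weakest trajectory above a finite prefix: t0 = s0, t_i = Y(t_{i-1}) join s_i."""
    if not prefix:
        return []
    out = [prefix[0]]
    for s in prefix[1:]:
        out.append(state_join(Y(out[-1]), s))
    return out

def realisable(prefix):
    """Lemma 4.8(1): realisable exactly when the over-defined value Z never appears."""
    return all(v != 'Z' for state in prefix for v in state)
```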
• We compute |= ^ g=t>h as follows. First, compute ^(g). It is easy to determine whether Tt(g) C TZT using Lemma 4.8(1). If not, then there are inconsistencies in the antecedent which should be flagged for the user to deal with before verification continues. Thus we may assume that T\g) C TZT. \= 4 9=^h \ = Vcr e S T , {SATtt(a, g) C SAT^a, h)) (By definition) = W c r e 7e r, (SA7V(cr, C SATtt(a, h)) (since 7e r C S T ) = ^ V c r € 7eT, (SA7t(<r, g) C 5AT t (cr , />)) (By Lemma 4.8(2).) This result is useful because in this important special case, efficient STE-based algorithms can Chapter 4. Symbolic Trajectory Evaluation 90 be used. The rest of this thesis uses this result implicitly. The main computational task is to determine f= § g=^>/i \. By placing sensible restrictions on the logic used and checking for inconsistency in the defining trajectory set of the antecedent, we can then deduce |= \ g=s>h ) from|= $g=£>h). Chapter 5 A Compositional Theory for TL 5.1 Motivation Although STE is an efficient method of model checking, it suffers from the same inherent per-formance problems that other model checking algorithms do. Enriching the logic that STE sup-ports, as is proposed in previous chapters, potentially exacerbates the problem. A primary thesis of this research is that a compositional theory for TL can overcome performance limitations of automatic model checking. Compositionality provides a method of divide-and-conquer: the problem can be broken into smaller sub-problems, the sub-problems solved using automatic model checking, and the overall result proved using the compositionality theory. This chapter presents the compositional theory for trajectory evaluation, which is a set of sound inference rules for deducing the correctness of verification assertions. Chapter 6 discusses the develop-ment of a practical tool that can use this compositional theory — this allows the use of an inte-grated theorem prover/model checker with useful practical implementations. As discussed in Chapter 1, the focus of this theory is property composition: Section 5.2 presents compositional rules for TL; Section 5.3 presents additional compositional rules for T L n ; and practical considerations are presented in Section 5.4. Structural composition is briefly discussed in Section B . l . 91 Chapter 5. A Compositional Theory for TL 92 5.2 Compositional Rules for the Logic This section presents the main compositional theory with each rule being presented and proved in turn. The compositional theory is developed for the =^> relation. In general, the theory does not apply to the =D> relation since the disjunction, consequence, transitivity and until rules do not hold for this relation. However, the other composition rules do apply for =t> (in the related theorems below, replacing =g> with = o and ^ with =, and considering only trajectories in TZr will yield the desired result). Moreover, as shown in Section 5.3, the full compositional theory does hold for the =r> relation for the important realisable class of T L n The circuit shown in Figure 5.1 will be used in the rest of this section to illustrate the use of the inference rules. The circuit is very simple and can easily be dealt with directly by STE, but the smallness of the circuit helps the clarity of the example. A unit-delay model is used for inverter and gate delays. Notation: [B] is the simple predicate which evaluates to T when the state component B has the value Z, t when B has the value H, f when B has the value L, J_ when B has the value U. 
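The [B]-style notation can be read directly as a simple predicate on circuit states; the sketch below phrases it in terms of the node predicates used earlier. The node names A to F and their positions in the state tuple are assumptions made for illustration (Figure 5.1 itself is not reproduced in this text), as are the constructor names.

```python
# Illustrative encoding of the [B] notation and of the formulas used in the examples below.
NODE_INDEX = {'A': 0, 'B': 1, 'C': 2, 'D': 3, 'E': 4, 'F': 5}

def node(name):
    """[name]: top when the node is Z, t when H, f when L, bottom when U."""
    return node_pred(NODE_INDEX[name])

def Not(g):    return ('NOT', g)
def And(g, h): return ('AND', g, h)
def Next(g):   return ('NEXT', g)

# The antecedent/consequent pair of Example 5.1 below, [B] and Next (not [D]):
antecedent = node('B')
consequent = Next(Not(node('D')))
```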
Note that except for the Specialisation Rule, all of the proofs are for the scalar case only as this simplifies the proof. However, as a symbolic formula is merely shorthand for a set of scalar formulas, the rules for the symbolic case follow directly in all cases. Figure 5.1: Example Chapter 5. A Compositional Theory for TL 93 5.2.1 Identity Rule This rule is a trivial technical rule. However, it turns out to be useful in practice where a se-quence of inference rules will be used to perform a verification. In the practical system which implements this theory, the proofs are written as a program script, and this rule is useful to ini-tialise the process. Its advantage is that it makes the program slightly more elegant. Theorem 5.1. 5.2.2 Time-shift Rule The time-shift rule is important because it allows abstraction from the exact times things happen at. This may reduce the amount of detail that the human verifier will have to deal with, and more importantly, allows verification results to be reused a number of times. In practice this is very important in making verification efficient. Lemma 5.2. Suppose c/=g>A. Then Next g=^>Next h Proof. Let a = s0sis2... be a sequence such that t •< Sat(a, Next g). (1) t -< Sat(a>i,g) By definition of the satisfaction relation. (2) t ^ Sat(a>i,h) Sinceg==$>h. (3) t X 5a?(a>0,Next/i) Definition of satisfaction of Next. (4) Thus Next g==$>Next h. For all g € TL, g=^>g. Proof. Let t •< Sat(a,g). Clearly then t •< Sat(a,g). Hence g=^>g. • • Theorem 5.3 follows directly from Lemma 5.2 by induction. Chapter 5. A Compositional Theory for TL 94 Theorem 5.3. Suppose g=$>h. Then Vi > 0, Next*c/=^>Next'/i. Example 5.1. In the circuit of Figure 5.1, using STE, it can be shown that [5]=g>Next (-• [£)]). Using The-orem 5.3, we can deduce that Vi > 0, Next*[£]==§> Next (-'[£]). The requirement of Theorem 5.3 that i > 0 is necessary: in general it does not hold when i < 0. For example, in our circuit we can prove that Next1[D]=g>Next2[F]. However, it is not the case that Next°[Z>]=^>Next1[JE]. In the former case, the node C has the value H at time 1 because A is connected to ground; at time 0 we know nothing of the value of C. 5.2.3 Conjunction Rule Conjunction and disjunction allow the combination of separately proved results. This is partic-ularly useful where properties of different parts of the system being verified have been proved and need to be combined. Given two results <7I=^>/JI and g2^^>h2, the two antecedents are combined into one antecedent and the two consequents are combined into one consequent. Us-ing the conjunction rule, combination is done using the A operator, and using the disjunction rule, combination is done using the V operator. There is no need for the gi and hi to be 'inde-pendent', i.e. they can share common sub-formulas. Theorem 5.4. Supposegl=^>hi andg2=^>h2. Then gi A g2=^hi A h2. Chapter 5. A Compositional Theory for T L 95 Proof. Let a £ and suppose t ^ Sat(a, gxAg2). (1) t •< Sat(a,gx) A Sat(a,g2) Definition of Sat(a, gx A g2). (2) t r< Sat(a,gi), i = 1,2 Lemma 3.2(2). (3) t ^ Sat(a,hi), i = 1,2 Since gi=^>hi, i — 1,2. (4) t X Sat(a, hi) A Sat(a, h2) Lemma 3.2(2). (5) t ^ Sat(a,hiAh2) Definition of Saf(<r, /ti A /i 2). As cr is arbitrary, c/i A c/2 =^>/ii A / i 2 - Ll Example 5.2. In the circuit shown in Figure 5.1, we can show using STE that - . [ £ ] = » N e x t [D] - i [A]=»Next [C\. Using Theorem 5.4 we have that: ,[A] A ~>[B] =^>Next [C] A Next [£>]. 5.2.4 Disjunction Rule Theorem 5.5. Supposeg\=^hi andg 2^^>h 2. 
Then #i V g2=^>hx V / i 2 . Proo/i Let c £ Sw and suppose t ^ Sat(a, gx V g2). (1) t ^ Sat(a,gx)\J Sat(a,g2) Definition of Sat(a,gx V c/2). (2) t X Sat(a,gi), fori = 1 orz = 2 Lemma 3.2(1). (3) t r< 5a?(cr, for i - 1 or i = 2 Since gi=g>/ji, i = 1,2. (4) t X Sat(a,hx)y Sat(a,h2) Lemma 3.2(1). (4) t ^ Sat(cr,hiV h2) Definition of Sat(a, hx V / J 2 ) . As a is arbitrary, #i V g2=^hx \J h2. • Chapter 5. A Compositional Theory for TL 96 Example 5.3. In the circuit in Figure 5.1, we can use STE to show that ->[D]=^Tlext->[E] -.[C]=»Next-•[£]. Using Theorem 5.5, we have that [->[D] V -n[C])=»Next-•[£]. Although the consequents of both premisses used here in the disjunct rule are the same, in gen-eral they may be different. 5.2.5 Rules of Consequence The rules of consequence have two main purposes: • Rewriting antecedents and consequents into syntactically different but semantically equiv-alent forms (see Example 5.4); • Removing information which is not needed for subsequent steps in the proof so as to re-duce clutter (see Example 5.5). The next lemma is an auxiliary result. Informally it says that if the defining sequence sets of g and h are ordered with respect to each other, then every sequence that satisfies h also satisfies 9-Lemma 5.6. Suppose A'(flf) Qv Ak(h) and t -< Sat(a, h). Then t -< Sat(cr,g). Chapter 5. A Compositional theory for TL 97 Proof. (1) 38 G A* (h) 9 8 : f f and t ^ Sat(8,h) Lemma4.3. (2) 38' <E A'(flf) 3 8'QS Definition of Qr . (3) t •< Sat(8',g) Lemma 4.3. (4) C o - Transitivity of (1) and (2). (5) t -< Sat(a,g) From (3) and (4) by Lemma 4.3. • The intuition behind this is that if At(g) C p Ak(h), then any sequence that satisfies h will also satisfy g. Given this result, the rules of consequence are easy to prove. Theorem 5.7. Suppose g=^>h and At(g) C p A'(flfi) and A ' ( / i i ) C p A*(/i). Then <?i=^S>/ii. Proof. Suppose a is a trajectory such that t -< Sat(a, gi). (1) t -< Sat(a,g) Lemma 5.6. (2) t •< Sat(a,h) g=$>h. (3) t -< Sat(a,hi) Lemma 5.6. (4) gf1=^>/i1 Since a is arbitrary. Example 5.4. Using this theorem to rewrite one assertion into a semantically equivalent one can be illustrated by examining the result of Example 5.2: Since conjunction can be distributed over the next-time operator, as Next [C] A Next [D] = Next ([C] A[£>]), this can be rewritten as: • (~>[A] A --[5])=»(Next[C] A Next [D]). (->[A\ A -.[£])=»Next ([C]A[D]). Chapter 5. A Compositional Theory for TL 98 Example 5.5. In the circuit of Figure 5.1, we can show using STE that ([B] A Next [B])==^Next1(-.JD) A Next2(-./J). Using Theorem 5.7, we can refine this to ([B] A Next [£])==§>Next1(-.Z)). 5.2.6 Transitivity The rule of transitivity is an analogue of the transitivity rule of logic: it gives the condition for deducing from c/i=g>/^ i and g2=^>h2 that gi=^>h2. This condition is that A ' (g2) Qv A^g i ) ! ! A*(/ii). Note that this is a weaker condition than showing that A * ^ ) Cp A*(/ii). Theorem 5.8. Suppose gi=^>hi and g2==$>h2 and that Al(g2) Cp ^ (gi) U At(h1). Then g1=^>h2. Proof. Suppose a is a trajectory such that t -< Sat(a, gi). (1) t X Sat(a,hi) 5i=§>/*i (2) t X Sat(a, giAhi) Definition of 5af(cr, giAhx). (3) 35 e A*(0i A/i i) 9 SQa Lemma 4.3. (4) A * ^ ! A hi) = At(gi) II A t ( / i 1 ) By definition of A*. (5) 35'e A*(^ ) 3 <y E 5 A ' G f e ) ^ A ^ O H A * ^ ) . (6) S'Qa Applying transitivity to (3) and (5). (7) t X Sat{5',g2) (8) t r< Sat(a,g2) From (5) by Lemma 4.3. From (6), (7) by monotonicity. Chapter 5. A Compositional Theory for TL 99 (9) t -< Sat(a,h2) g2=^h2. 
(10) ^ 1 = ^ > / i 2 Since a was arbitrary. • Example 5.6. Using STE, we can prove the following about the circuit of Figure 5.1: • - . [ B J^^Next 2^] • Next 2[£]^>Next 3 HF]) Then, using Theorem 5.8, we can deduce ->[5]=^>Next3(-i[F]). 5.2.7 Specialisation Specialisation is one of the key inference rules. By using specialisation it is possible to generate a large number of specific results from one general result. With STE, it is often cheaper to prove a more general result than a more specialised result. Thus in some cases, it may be cheaper to generate a more general result than needed and then to specialise this general result than to use STE to obtain the result directly. Specialisation also promotes the re-use of results. It is often used together with transitivity: before applying transitivity to combine two assertions, one or both of the assertions are first specialised. For example, a general proof of the correctness of an adder is straightforward to obtain us-ing trajectory evaluation, even for large bit widths. Such a proof may show that if bit vectors representing the numbers a and b are given as inputs to the circuit, then a few time steps later the bit-vector representing a + b emerges as output. There are two reasons why one might want to specialise such a proof: • If the adder is part of a large circuit the actual inputs may be bit-vectors representing com-plex mathematical expressions. Since STE relies on representing bit-vectors with BDDs, if the BDDs needed to represent these mathematical expressions are very large, it may not Chapter 5. A Compositional Theory for TL 100 be possible to use STE to prove that the adder works correctly for the particular inputs. The solution is to prove that the adder works correctly for the general case, and then to specialise the result appropriately. • The adder may be used a number of times in a computation, each with different input values. Instead of proving the correctness of the circuit for each set of inputs, the proof can be done once and then the specific results needed can be obtained by specialisation (and, probably, time-shifting). Recall the definition of the boolean subset of TL presented as Definition 4.15. Definition 5.1. Define S, a subset of TL, by S ::= t | f | V | SAS | ->£ Definition 5.2. 1. £: V —y S is a substitution. 2. A substitution £: V —> S can be extended to map from TL to TL: • i{9i^92) = £(0i)Af(o 2) • £ H ? ) = - to) • e(Nexto) = Next (((g)) • ((g Until h) = ((g) JJntil ((h) • Otherwise, if g is not a variable, ((g) = g If T is the assertion |= (] g=$>h [> then £(T) is the assertion (= ^ £{g)=$>£(h) [>. Chapter 5. A Compositional Theory for TL 101 Lemma 5.9 (Substitution Lemma). Suppose |= ^  g=g>/i I and let £ be a substitution. Proof. Let (f> by an arbitrary interpretation of variables and a be an arbitrary trajectory such that Sat{aA^{g))). (1) Let 4' = 4> o £ (2) t ^ Sat(a^'{g)) Rewriting supposition. (3) 4' is an interpretation of variables By construction. (4) t •< Sat(a,<f/(h)) (5) t =< Sat(a,<f>(((h))) Rewriting (4). (6) h l ( ( f i ) ^ ) l 0 and a were arbitrary. • Example 5.7. Suppose that part of a circuit multiplies two 64-bit numbers together and then compares the result to some 128-bit number. Let c be the boolean expression that this part of the circuit com-putes — in general it will not be possible to represent c efficiently since the BDD needed to represent c will be extremely large. Now suppose that the next step in the circuit is to invert c. 
We may wish to prove that Tx= \=\[B) = c =$> Next ([£•] = -.c) \ is true. Given that c is so large, it will not be possible to use STE directly to do this. But, let T2= \=/\[B] = a=^Hext{[D} = ^a)\ where a is a variable (an element of V). Proving that T2 holds using STE is trivial. Having proved T2, we can easily prove Ti using Chapter 5. A Compositional Theory for TL 102 Lemma 5.9. Let { v otherwise c when v = a be a substitution. Note that Ti = £(T2). As T2 holds, and as £ is a substitution, by Lemma 5.9, T i holds too. Although substitution is useful, in practice sometimes a more sophisticated transformation is also desirable. Lemma 5.10 shows that it is possible to perform a type of conditional substitu-tion. A specialisation is a conjunction of conditional substitutions which allows us to perform different substitutions in different circumstances. An example of the use of specialisation is given in Chapter 7. Definition 5.3. S = [(ei, £ i ) , . . • , (e n , £„)] where each £ is a substitution and each e,- 6 S, is a specialisation. If 0 € TL, then -(</) = A?=1(e,- &(</)). Lemma 5.10 (Guard lemma). Suppose e e S and (= (] g=g>/* Then |= <] (e 5r)==£>(e / J ) Proof. Suppose t •< Sat(a, e =>- g). By the definition of the satisfaction relation, either: (i) t -< Sat(a, -ie). In this case, by the definition of the satisfaction relation, t •< Sat(a, e =>• /i). (ii) t ^ 5af(cr, In this case, by assumption Sat(a, h). Thus, by definition of the satisfaction relation, t -< Sat(cr, e =>- h). As a was arbitrary the result follows. • Chapter 5. A Compositional Theory for TL 103 Theorem 5.11 (Specialisation Theorem). Let H = [ ( e i , £ i ) , . . . , (en, £„)] be a specialisation, and suppose that |= ^ g=^>h \. Proof. (3) |= 4 A?=1 (e,- A"=1(e; =>• } Repeated application of Theorem 5.4. (4) |= ^ ) By definition. • 5.2.8 Until Rule Theorem 5.12. Suppose ^ r1=g>/i1 and g2=^>h2. Then gi Until c/2=^>/ii Until h2. Proof. Let cr = s0sis2... be a trajectory such that t -< Sat(a,gi Untilg 2). (1) For* = l,... ,n,\= Ui(9)=3>ti(W By Lemma 5.9. (2) By Lemma 5.10. (1) 3*9 t -< AJ~o Sat(cr, Next^) A Sat(a, Next'^ 2) (2) t ^ Sar(cr, Next'#2) and t ^ Sat(ja, NextJ^i), j = 0,... , i — 1 By definition of Sat. Definition of conjunction. (3) t X Sar(cr, Nexts7i2) and • t ^ 5af(cr, Next-'/ii), j = 0,... , i - 1 (4) t -< Apo Sat(a, Next^j) A taf(cr, Next*/»2) (5) t -<Sat(a:hi Until /i2) Assumptions and Theorem 5.3. Definition of Sat. Definition of Sat. (6) 01 Until c/2=g>/ii Until h2 Since a was arbitrary. Corollary 5.13. Suppose g=^>h: then Existsg=^>Exists/j and Globalg=^Global h. Chapter 5. A Compositional Theory for T L 104 Proof. The first result follows directly from the theorem using the definition of the Exists operator and the fact that t = » t . Let a be an arbitrary trajectory such that t •< Sat(a, Global g). (1) f •< Sat(a, Exists (->g)) Expanding the definition of Global . (2) Wi,f •< Sat(a>i,->g) Definition of Sat for Exists . (3) V i , t X Sat(a>i,g) Definition of Sat for ->, Lemma 3.1. (4) V i . t X Sat(a>i,h) g=£>h (5) V i , f ^ Sat(a>i,->h) Definition of Sat for - i . (6) f X Sat(a, Exists (->h)) Definition of Sat for Exists . (7) t X Sat(cr, Global g) Definition of Sat for Global . Which concludes the proof since a was arbitrary. • Example 5.8. Consider again the circuit in Figure 5.1. Using STE, it is easy to prove [£]==^Next (->[£)]). Using Corollary 5.13, we can deduce that Global [B]=g>Global (Next (->[D])). 
5.3 Compositional Rules for T L n For the realisable fragment of TL„, the compositional theory above applies to the =t> relation as well as the =g> relation. A key result used is Lemma 4.8. The remainder of the section assumes that we are dealing solely with the realisable fragment of TL„. Only the statements of theorems are given as the proofs are very similar to or use the proofs in the previous section so the proofs are deferred to Section A.3. Table 5.2 summarises the rules. Chapter 5. A Compositional Theory for T L 105 Theorem 5.14 (Identity). For allg G TL„, g=Og. Theorem 5.15 (Time-shift). Suppose g=oh. Then Wt > 0, Next'g=> Next Theorem 5.16 (Conjunction). Suppose gi=t>h1 and g 2=t>/i 2. Then gi A g2=$>h\ A h2. Theorem 5.17 (Disjunction). Suppose gi=i>hi and g2=o>h2. Then gi V g2=S>hi V h2. Theorem 5.18 (Consequence). Suppose g = o / i and At(g) Cp At(g1) and A*(/ii) Cp A f c(/i). Then ^ =o/e-i. Theorem 5.19 (Transitivity). Suppose c/i=o/ii and g2=>h2 and that At(g2) Cp At(g1) II A t (/ i i) . Then g1=t>h2. Theorem 5.20 (Specialisation). Let EE = [(ei, £ i ) , . . • , (e„, £ n ) ] be specialisation, and suppose that |= t\ g=C>h |). Then \= tE(g)=>~(h)). Theorem 5.21 (Until). Suppose g1==£>hi and g2=$>h2. Then gi Unt i l g2=[>/ii U n t i l h2. Other rules, like Corollary 5.13 are possible too. To illustrate this, and because the result is useful, a finite version of Corollary 5.13 follows. Lemma 5.22. If g=t>h, then for all t, then Global [(0, t)] c /=^Global [(0, t)] h. Chapter 5. A Compositional Theory for T L 106 Proof. (By induction ont) (1) Global [(0, 0)] 0^>Globa l [(0, 0)] h By hypothesis. (2) Assume as induction hypothesis: Global [(0, t - 1)] 0=^>Global [(0, t - 1)] h (3) Next<<7=ONext*/i Time shift of hypothesis. (4) Global [(0, t)] #=f>Global [(0, t)] h Conjunction of (2) and (3) This concludes the induction. • Corollary 5.23. If g=f>h, then for all s,t,t > s, Global [(s, t)] g = 0 Global [(s, t)] h. Proof. (1) Global [(0, t - s)] fl(=>Global[(0,< - s)] h Lemma 5.22 (2) Global [($, t)] 0=OGlobal[(s,i)] h Time-shift (1). • 5.4 Practical Considerations 5.4.1 Determining the Ordering Relation: is At(g) Q-p A * ( / i ) ? To apply the rules of consequence and transitivity, it is necessary to answer questions such as A*(<7) At(h)l One way of testing this is to compute the sets and perform the comparison directly. However, for practical reasons we often wish to avoid the computation of the sets, and to use syntactic and other semantic information to determine the set ordering. Typically formulas like g and h share common sub-formulas and even some structure, which makes the tests explored in this section practical. Lemma 5.24 is the starting point of these tests, and although very simple, it is important in Chapter 5. A Compositional Theory for TL 107 Name Rule Side condition Identity g=>g Time-shift g=$>h Next*c/=>Next</i t > 0 Conjunction gl=$>h1 g2=$>h2 giAg2=t>hlAh2 Disjunction g1=C>h1 g2=C>h2 9\ V g2=>h1 V h2 Consequence g=>h g1=S>hl A\g) \ZV At(g1), Al(hi) Cp Afc(fc) Transitivity g1=t>h1 g2=$>h2 gi=t>h2 A\g2)Qv A ' ^ O n A 4 ^ ) Specialisation \=tZ(g)=4>Z{h)) E a specialisation. Until g1=^>h1 g2=oh2 gi Untilg 2=t>hi Un t i l h2 Table 5.2: Summary of TL„ Inference Rules practice. One effect of this lemma is that if two formulas are syntactically different but seman-tically equivalent, then they are interchangeable in formulas. Lemma 5.24. 
If g and h are simple then the question whether At(g) Cp Ak(h) is whether for V(s, q) G Dh with q = t or q — T, 3(s', q) G Dg with s 'Cs . . Proof. This is a restatement of the definition of A*. • Given this starting point, the ordering relation can be determined by examining the structure of formulas and applying the following lemmas. Chapter 5. A Compositional Theory for TL 108 • Lemma 5.25. If g = g i V g2, then A\g) QP A * ^ ) . Proof. (1) Let 8 G A* (01) (2) A'(flf) = At(gi) U At(g2) By definition of A* (3) 6 e A\g) Set theory (4) 8Q8 Reflexivity of partial order (5) A*(o) C p A t(0i) Definition of C p Corollary 5.26. For e G cT, A k ( e ^ o) C p Ak(0). Proo/! Straight from the definition of implication. • Lemma 5.27. If g = g\ Ag2, then A*(01) C p A'fa). Proof. (1) Le t5GA t (0) (2) G A'(oi), <J2 G A fc(02) 9 <J = U 82 Definition of A k . (3) 8^8 Definition of join. (4) A t(0 1) C p A'(flf) Definition of C p • Lemma 5.28. Suppose Ak(g) C p Al(h): then Vi > 0, A^Next^) C p At(Hextih). Proof. By induction on i. The base case of i = 0 follows directly from the assumption. Suppose A'tNext'flr) C p A^Next'Ti). Chapter 5. A Compositional Theory for TL 109 Then shift{At(llGxtig)) Qp shiftA*(Next*h))1. Thus A ^ N e x t ^ 1 ^ ) Qv A*(Next(i+1)fc). • Lemma 5.29. Suppose A'(flri) Qp A*(/ii) and A k(0 2) Qv A*(fc2). Then A * ^ Untilg 2) QP A k ( / i i Un t i l h2). Proof. (1) \/i> 0, A*(Next'sfi) QP Afc(Next*'Jii) By Lemma 5.28 (2) V« > 0, A ^ N e x t ^ ) Qv A^Next*'^) By Lemma 5.28 (3) A*(oi U n t i l e ) = U^ o (A t (Next°0 1 ) II . . . II A ^ N e x t ^ " 1 ^ ) II A4(Next*o2)) By definition (4) Qv U°l0 (A'(Next%) n . . . H A^Next**'-1)/*!) II A*(Next*/i2)) From(l) and (2) (5) = A*(^i Un t i l h2) By definition • Lemma 5.30. For all i, A*(Next80) Qp A*(Globalg). Proof. Let 8 £ A^Globalflf). (1) t < Sat(8, Global g) Lemma A.6 (2) = ->Sat(5, Exists-i^r) Definition of Global (3) = ->Sat(8, t Un t i l ->g) Definition of Exists (4) = - i V ^ 0 (Sat(8>i,->Nextlg)) Definition of satisfaction (5) = - i V-^ 0 -i(5af(5>i,NextJ0)) Definition of satisfaction (6) = A£ oSaf(£>;,Next J0) De Morgan's law (7) Vi , t -< Sat(8>i,Nextlg) Definition of conjunction 1Recall that shift(sos1s2 • • •) = XsoSis2 • • • Chapter 5. A Compositional Theory for TL 110 (8) V i , 36' e A4(Next'y) BS'QS Lemma 4.3 (9) V i , A^Next'flf) C p A^Globalg) Definition of C p • Similar rules can be developed for A f ; the two sets of rules are tied together by the definition of the satisfaction of negation. These tests seem very simple and obvious, but in practice they allow the development of efficient algorithms to test whether At(g) C p At(h). 5.4.2 Restriction to T L n The restrictions on the logic TL„ make it much easier reason about. Recall that the basis of the logic is the set of predicates Gn • In practice, many T L n formulas are of the form r . si A Next2( A [riij] = e t J) i=0 j=0 ' ' where the e;j 6 £• Given flr = Ar:oNexf(A;Lo[ny = <,) h = A^0Nextfc(A^[nw] = e M ) , from Lemma 5.27 it follows that to determine whether At(g) C p A t ( A ) , we need to check whether V i , j ; 3/ 9 n'itj = ra,-,/ A e\3 = e,v. This can largely be done syntactically. Depending on the representation used, testing whether eiti — e\j may either be done syntactically or using other semantic information. Particularly when the level of abstraction is raised, it is often the case that other semantic information must be used. Of course, there are important cases where formulas are not of this form, and we need to have other ways of reasoning about them. 
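The shape of these checks is easy to see in a small sketch. The Python fragment below is not the prototype's implementation: the triple representation and the equality oracle are simplifications introduced here. It flattens a conjunction of formulas of the form Next^t ([n] = e) into (time, node, expression) constraints and tests whether one formula's constraints subsume the other's; which argument plays which role depends on the rule (consequence or transitivity) being applied.

    # Sketch of the syntactic ordering test for conjunctions of Next^t ([n] = e).
    # Each formula is given as a list of (time, node, expression) constraints.
    # Expression equality is delegated to an oracle 'expr_eq', which may be purely
    # syntactic or may appeal to richer domain knowledge (e.g. BDD comparison).

    def subsumes(stronger, weaker, expr_eq=lambda a, b: a == b):
        """True if every constraint demanded by 'weaker' is matched in 'stronger'."""
        stronger = set(stronger)
        return all(any(t == ts and n == ns and expr_eq(e, es)
                       for (ts, ns, es) in stronger)
                   for (t, n, e) in weaker)

    g = [(0, "J", "j"), (0, "K", "k"), (1, "M", "m")]
    h = [(0, "J", "j"), (1, "M", "m")]
    print(subsumes(g, h))   # True: every constraint of h is matched in g
    print(subsumes(h, g))   # False: h says nothing about node K at time 0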
A more general and typical case is verifying an assertion of the form \ g=S>h ty, where g is an arbitrary TL„ formulas and h = NextJ( A f = 0 ( [ « i ] = &i) )• Chapter 5. A Compositional Theory for TL 111 Definition 5.4. Strict dependence: Informally, g £ TL„ is strictly dependent on the state components fl = { r i , . . . , r;} at time t if g being true at time t implies that the components n , . . . , rj have defined values, and g is not dependent on any other state components. The formal condition for strict dependence is: g is strictly dependent on the state components fl if In practice, strict dependence can often be checked syntactically. For example [B] = e where e £ £ is strictly dependent on B. This comes from the property of exclusive-or — if a © b £ B, where exclusive-or, a © b, is defined as aA(-<b) V (->a) A 6 — then a, 6 £ B). Moreover, strict dependence can be checked relatively efficiently (as will be seen later). Theorem 5.31 (Generalised Transitivity). Let Ai be a trajectory formula such that r = TAI £ 7£T(^)> and let hi = Next*/i be a TL„ formula strictly dependent on state components {rx,... , n } at time t where h contains no tem-poral operators. Let A2 = Next*(Aj= 1 [rj] = VJ) where the Vj £ V. Suppose |= (\ Ai=>hi |> and |= ^ A2=>h2 |>. Then, (1) There is a substitution £ such that |= t\ Ai=^>£(h2)and (2) h(TtA*) = t. Proof. (1) For i = 1,... , /, 3e{ £ £ 3 e,- = r^ 1 [rt] |= ^ A I = I > / H ,^ strict dependence of hi. (2) Let C = Next* A $ = 1 [r,-] = e; V£ £ A\g) : Vr £ fl, U C «J([r]; Vs g fl, <$,[s] = U. (3) By construction of C. (4) Conjunction. (6) (5) For v £ {vi,... , vi} let £(UJ) = ej For u ^  {vi,... , vi} let £(u) = u |= (| A2=t>/i2 [> Given. Chapter 5. A Compositional Theory for T L 112 (7) (9) (10) (8) h ^ i A e ( A 2 ) = > ^ 2 ) ^ C = t(A2) By construction. Rule of consequence. From (4), (8) by transitivity. Substitution (Lemma A. 18). This concludes the proof of (1) (11) 'h\TtAl)=t \= (| Ai =>Next'/i [> and TM G nT(t) This concludes the proof of (2) • Although the proof of this theorem is relatively complex, the theorem itself is not, and very importantly many important side-conditions can be checked automatically. The seeming com-plexity of the theorem comes from having to relate Ai to A2. But, this turns out to be the virtue of the theorem. The difficulty with trying to use transitivity between two results |= t\ Ax => hi \ and |= \ A2=c>h2) is to find the appropriate specialisation for the latter result. This theorem provides a method for doing this: the first part of the theorem says that a specialisation exists, and the second part helps find it. The example below illustrates the use of the theorem. Example 5.9. Figure 5.2 shows two cascaded carry-save adders (CSAs). There are four inputs to the entire circuit, and two outputs. Three of the inputs get fed into one of the CSAs; the other CSA gets the fourth of the inputs and the two outputs of the first CSA. Assuming each CSA takes one unit of time to compute its results, if four values get entered at J, K, L and Af, two units of time later, the sum of these four values will be the same as the sum on nodes P and Q. Ai = ([J] = j) A([K] = k) A{[L] = I) A(Next [Af] = m) hi = Next ([N] + [0] = j + k + I A [M] — m) A2 = Next ([Af] = m) A([N] = n) A([0] = o) h2 = Next2([P] + [Q] = m + n + o) Chapter 5. A Compositional Theory for TL 113 Then |= 4 A i = r > / i i [> |= 4 A2=oh2}. These two results can be proved using trajectory evaluation. 
The process of performing tra-jectory evaluation also checks that TAI 6 TZr{d(hi)). A2 and hi are of the correct form for Theorem 5.31. Furthermore the strict dependence of hi on components M, N and O can be checked syntactically. By the theorem we have that there is a specialisation £ such that \=$Al=>Kext2([P] + [Q] = Z(m + n + o))) _ and ([AT] + [O] = j + fc + I A [M] = m)(TAI) = t which means that (^[N] + ^[0]=j + k + l)=t. But, by the structure of hi we know that r2M [N] = f (n) and [0] = £(o) and [M] = £ ( m ) and so, as by the properties of substitution £(x + y) = £ ( x ) + £(y), (£(n + o) = j + fc + / A £(m) = m) = t. This result has given us sufficient information about £. Thus, | = (J A i = T > N e x t 2 ( [ P ] + [Q] = j + fc + / + m ) }. 5.5 Summary This chapter presented a compositional theory for TL; this theory is very important in overcom-ing the computational bottlenecks of automatic model checking. The focus of the compositional theory is property composition, which is particularly suitable for STE-based model checking. The general compositional theory for TL was presented, followed by additional rules for T L n . Chapter 5. A Compositional Theory for TL 114 M -L K J o N Q P Figure 5.2: Two Cascaded Carry-Save Adders Section 5.4 discussed some issues that are important in the practical implementation of the the-ory. Chapter 6 shows how the compositional theory can be implemented in a practical tool. Chapter 6 Developing a Practical Tool This chapter discusses how to put the ideas presented in the previous chapters into practice. A number of prototype verification systems using these ideas have been implemented to test how effectively the verification methodology can be used. Although these prototypes have been used to verify substantial circuits, they are prototypes, and the purpose of the chapter is to show how a practical verification system using TL can be developed, rather than to describe a particular system. Section 6.1 discusses the Voss system, developed to support the restricted form of trajectory evaluation. Voss is important because the algorithms that it implements form the core of the prototype verification systems. This section also discusses the important issues of how boolean expressions, sets of interpretations, and sets of states are represented. Section 6.2 examines higher-level representational issues, in particular efficient ways of representing TL formulas so that they can be efficiently stored and manipulated. It is important that appropriate representa-tional schemes be used since different methods are appropriate at different stages. By convert-ing (automatically) from one scheme to another, the strengths of the different methods can be combined. Section 6.3 shows how trajectory evaluation and theorem proving can be combined into one, integrated system. The motivation for this is to provide the user with a tool which pro-vides the appropriate proof methods at the right level of abstraction — model checking at the low level, theorem proving at the high level where human insight is most productively used. The theorem prover component is the implementation of the compositional theory, which is critical 115 Chapter 6. Developing a Practical Tool 116 for the practicality of the approach. One of the key issues here is how to provide as much assis-tance to the human verifier as possible. The final section, Section 6.4 extends existing trajectory evaluation algorithms so they can be used to support a richer logic. 
6.1 The Voss System In order to verify realistic systems, any theory of verification needs a good tool to support it. Seger developed the Voss system [115] as a formal verification system (primarily for hardware verification) that uses symbolic trajectory evaluation extensively. There are two core parts of Voss. The user interface to Voss is through the functional lan-guage FL, a lazy, strongly typed language, which can be considered a dialect of ML [107]. One of the key features of FL is that BDDs are built into the language, as boolean objects are by de-fault represented by BDDs. Since BDDs are an efficient method of representing boolean func-tions, data structures based on BDDs, such as bit vectors representing integers, are conveniently and efficiently manipulated. Of course, as previously discussed, the limitations of BDDs mean that there are limitations on what can be represented and manipulated efficiently; how these limitations are overcome is an important topic of this chapter. The second component of Voss is a symbolic simulation engine with comprehensive delay modelling capabilities. This simulation engine, which is invoked by an FL command, provides the underlying trajectory evaluation mechanism for trajectory formulas. Trajectory formulas are converted into an internal representations (the 'quintuple lists') and passed to the simulation engine; these quintuple lists essentially are representations of the defin-ing sequences of the antecedent and consequent formulas comprising the assertions. The an-tecedent formula is used to initialise the circuit model, and the simulation engine then com-putes the defining trajectory for the antecedent. As this evaluation proceeds, Voss will flag any antecedent failures, viz. a Z appearing in the antecedent, and compares the defining trajectory Chapter 6. Developing a Practical Tool 117 with the defining sequence of the consequent. Two types of errors are reported if the comparison fails: a weak consequent failure occurs if Us appearing in the defining trajectory are the cause of the failure1; a strong consequent failure is reported if the defining trajectory of the antecedent is not commensurable with the defining sequence of the consequent.2 Using the terminology of TL and Q, a weak failure corresponds to the satisfaction relation returning J_, a strong failure corresponds to a f . Circuit models can be described in a number of formats. Interacting through FL, the user sees models as abstract data types (ADTs) of type fsm. FL provides a library (called the EXE library) which allows the user to construct gate level descriptions of circuits. Once a circuit is constructed as an EXE object, the model can be converted into an fsm model. There are also tools provided for converting other formats (both gate level and switch level models) into fsm objects; among others, VHDL and SILOS circuit descriptions can be accepted. Representing Sets of Interpretations Since STE-based verification computes the sets of interpretations of variables for which a given relation holds, efficient methods for representing and manipulating these sets is important. Voss represents a set of interpretations by a boolean expression (i.e., by a BDD). If ip is a boolean expression, then ip represents the set {<$> £ $ : </>(</?) = t}. This representation relies on the power of BDDs, so although usually a good method, it breaks down sometimes. One advantage of this representation is that set manipulation can easily be accomplished as boolean operations. 
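The correspondence between set operations and boolean operations is easy to see in miniature. In the sketch below, explicit enumeration over truth assignments stands in for BDDs; this is an assumption made only to keep the fragment self-contained, since Voss manipulates genuine BDDs through FL.

    # Sketch of the representation described here: a boolean expression phi over
    # variables stands for the set of interpretations { assignment : phi = true }.

    from itertools import product

    VARS = ["a", "b", "c"]
    assignments = [dict(zip(VARS, bits))
                   for bits in product([False, True], repeat=len(VARS))]

    def interps(pred):
        """The set of interpretations represented by a boolean predicate."""
        return {tuple(e[v] for v in VARS) for e in assignments if pred(e)}

    phi1 = lambda e: e["a"] and not e["b"]        # one set of interpretations
    phi2 = lambda e: e["b"] or e["c"]             # another

    union        = lambda e: phi1(e) or phi2(e)   # disjunction  ~ set union
    intersection = lambda e: phi1(e) and phi2(e)  # conjunction  ~ set intersection
    complement   = lambda e: not phi1(e)          # negation     ~ set complement

    assert interps(union)        == interps(phi1) | interps(phi2)
    assert interps(intersection) == interps(phi1) & interps(phi2)
    assert interps(complement)   == interps(lambda e: True) - interps(phi1)

    # Containment is implication, and non-emptiness is existential quantification
    # over the variables (a standard BDD operation, available through FL in Voss).
    contained = all(phi2(e) or not phi1(e) for e in assignments)   # phi1 implies phi2?
    nonempty  = any(phi1(e) for e in assignments)                  # exists a,b,c. phi1?
    print(contained, nonempty)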
If ipi represents the set of mappings $1 and tp2 the set of mappings $ 2 , then ipy V ip2 is the representation of $! U $ 2 , ^1 A (/?2 the representation of $1 n $ 2 , is the representation of $ \ $ i , and set containment can be tested by computing for logical implication. 1This happens if the denning trajectory of the antecedent is less than the defining sequence of the consequent; the verification might succeed with a stronger antecedent. 2 A stronger antecedent will only make things worse. Chapter 6. Developing a Practical Tool 118 A more general point about representing interpretations also needs to be made. Suppose that 6 G £ is a boolean expression (and so represented as a BDD). Let u 1 ? . . . , vm be the variables appearing in b? To ask whether there is an interpretation 0 such that 0(6) = t is the same as asking whether 3ui , . . . ,vm.b (are there boolean values that can replace the variables so that the expression evaluates to true?). Since existential quantification is a standard BDD operation, this can be computed in FL through the construction and manipulation of BDDs. Representing Sets of States Voss manipulates and analyses circuit models, viz. models where the state space is naturally rep-resented by Cn for some n. A state in Cn is thus a vector (c i , . . . , cn), where each c,- G C. Voss uses a variant of the dual-rail encoding system discussed in Section 3.2 for representing ele-ments of C \ which means that each state in Voss is represented by a vert^ . . . , (a n,6 n)), where the a;, 6; G B. The use of boolean variables allows one symbolic state to represent a large number of states. The vector s = ((ai,&i),... ,(a n,6 n)) (where the a,-, 6,- G £) represents the set of states {0(s) : <f> G $} where = ((0(a1), 0(60),... , (#a B ) ,#M)>-Note that the a8 and 6, need not contain any variables. This idea can be extended so that sets of sequences can be represented by symbolic sequences. This type of representation is known as a parametric representation. The alternative repre-sentation is the characteristic representation. (For discussion of these representations, see [43, 3This is a simple syntactic test; since quantification does not exist in £, we do not have to ask whether variables are free or bound. Chapter 6. Developing a Practical Tool 119 87].) x '• Cn & represents the set 1>:x(*)=t}. Note the s imi la r i ty to the way i n w h i c h interpretations are represented. T h i s indicates that x can be represented as a B D D — the mechanics o f this are presented now. L e t s = ( s i , . . . , sn) be a symbo l i c state representing the set o f states S, and let vu . . . ,vm be the variables appearing i n s. Le t r = ( r l 5 . . . , rn), where each r; is a pair o f boolean variables ( r 8 ] i , r i ) 2 ) not appearing i n { u i , . . . ,vm}. Thus x(r) c a n D e represented a boolean expression ( B D D ) conta ining the r; j variables only. To determine whether a part icular state is i n the set, the r ; are instantiated i n x(r)j the value obtained is t i f and on ly i f the state is the set. The advantage o f the characteristic representation is that it is convenient to per form un ion and intersection operations on sets o f states. Moreove r , as each set is represented by one B D D , set representations are canonica l , w h i c h is extremely useful. Howeve r , this mono l i th i c repre-sentation o f sets o f states can be very expensive. 
The primary advantage of the parametric representation is that it is very compact: n independent BDDs represent a set of states, which increases the size of the state space that can be manipulated. Moreover, this representation is particularly suitable for the symbolic simulation of the state space, and for the computation of defining sequences. It is the representation method on which STE is based.
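A small sketch may make the parametric, dual-rail scheme concrete. The particular encoding chosen below (H as (1,0), L as (0,1), unknown as (1,1), overconstrained as (0,0)) is one common convention and is an assumption of the sketch rather than necessarily the convention of Section 3.2; the point is only that each node is represented by a pair of boolean expressions over the variables, so a handful of small functions describes a whole family of concrete states.

    # Sketch of a dual-rail, parametric state representation. Each circuit node is
    # a pair of boolean expressions, modelled here as Python functions of an
    # interpretation phi of the boolean variables.

    H, L, UNKNOWN, OVER = (True, False), (False, True), (True, True), (False, False)

    def decode(pair):
        return {H: "1", L: "0", UNKNOWN: "U", OVER: "Z"}[pair]

    # A symbolic state over one variable 'v': node N1 carries v, node N2 carries
    # (not v), and node N3 is unconstrained.
    symbolic_state = [
        (lambda phi: phi["v"],     lambda phi: not phi["v"]),   # N1 = v
        (lambda phi: not phi["v"], lambda phi: phi["v"]),       # N2 = not v
        (lambda phi: True,         lambda phi: True),           # N3 = unknown
    ]

    def instantiate(state, phi):
        """The concrete state obtained under the interpretation phi."""
        return [decode((hi(phi), lo(phi))) for (hi, lo) in state]

    print(instantiate(symbolic_state, {"v": True}))    # ['1', '0', 'U']
    print(instantiate(symbolic_state, {"v": False}))   # ['0', '1', 'U']

A characteristic representation of the same set of states would instead be a single predicate over fresh variables for each rail, which is canonical and convenient for union and intersection but can be much more expensive to build.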
cannot be represented efficiently as BDDs, there is an effective way Chapter 6. Developing a Practical Tool 122 of representing and manipulating them through the abstract data type representation. As will be seen, using this method of representation is an effective way of going beyond the limits of BDDs. It is critical that both the conversion routine bv and the ADT operations are implemented correctly so that the diagram in Figure 6.2 is commutative. In the HOL-Voss system correctness was formally proved [90]. Although I did not go through this exercise in the prototype imple-mentations, this is a critical step in the production of a tool. However, one should note that there may be a trade-off between degree of rigour and performance. For example, in an interesting paper showing how BDDs can be implemented as a HOL derived rule [75], Harrison reports that a HOL implementation of BDDs as being fifty times slower than a Standard ML imple-mentation. Although this work is cited as being 'superior to any existing tautology-checkers implemented in HOL', Harrison points out that other approaches to ensuring correctness can be adopted. The ADT routines that implement the operations on the data objects constitute domain knowl-edge, representing the verification system's higher level semantic knowledge of what bit-level operations mean. There are different ways in which domain knowledge can be provided. One method is to have a canonical representation for data objects, or to have a set of decision proce-dures for the type (for example, to tell whether two syntactically different objects have the same semantics — whether if they are both converted into BDD structures, the structures will be the same). There is a limit to how far this can go; for example, with the integer representations used, no canonical forms exist, and decision procedures have limitations and can be expensive. However, as will be seen this can be effective and, since it is automatically implemented, user-friendly, reducing the load of the human verifier. Another method — which can be implemented as an alternative or as a complement to the decision procedure method — is to provide an interface to an external source of knowledge. One Chapter 6. Developing a Practical Tool 123 likely such source is a trusted theorem prover such as HOL, allowing the verifier to prove results in HOL, and then to import these results into the verification system. Although approach is very flexible and very powerful, it increases the level of expertise needed by the verifier considerably. 6.3 Combining STE and Theorem Proving The practical importance of the compositional theory presented in Chapter 5 is that it provides a powerful way of combining STE and theorem proving. The inference rules of the composi-tional theory are implemented as proof rules of a theorem prover. The combination of theorem proving and STE creates a tool which provides the appropriate proof mechanism at the appro-priate level. For a human to reason at the individual gate level, while conceptually simple and straightforward, is often too onerous and tedious. A single trajectory evaluation can often deal with the behaviour of hundreds or thousands of gates, depending on the application. The the-orem prover allows the human verifier to use insight into the problem to combine lower-level results using the compositional theory. 
By using the representation method discussed above, and the compositional theory, the computational bottle-neck of automatic model checking al-gorithms can be widened considerably. The prototype verification systems built implement proof systems based on STE and the compositional theory for STE. The object of verification is to prove properties of the form |= ^ g=C>h The proof system does this by using STE as a primitive rule for proving assertions; the compositional theory is implemented as set of proof rules that can be used to infer other results. From a practical point of view, the Voss system provides a good basis for this. The user in-teracts with the proof system using FL. By using the appropriate FL library routines, trajectory evaluation and the compositional theory can be used. There are different ways in which this could be done and packaged. Chapter 6. Developing a Practical Tool 124 For example, in the first prototype tool, the verification library consisted of the following rules, implemented as FL functions, each function either invoking trajectory evaluation or one of the compositional inference rules. • V O S S : This performs trajectory evaluation. • I D E N T I T Y implements the identity rule. • C O N J U N C T implements conjunction. • SHIFT allows assertions to be time-shifted. • P R E S T R O N G implements part of the rule of consequence, allowing the antecedent of an assertion to be strengthened. P O S T W E A K implements the other part of the rule of consequence, allowing the conse-quent of an assertion to be weakened. Both of these rules use domain knowledge to check the correctness of the use of the rule. • T R A N S takes two assertions, checks whether transitivity can be applied between the first and second (i.e., the correct relationship holds between the two assertions), and if it can be, applies the rule of transitivity. • S P E C I A L allows the user to specialise an assertion. • S P T R A N S takes two assertions, 7\ and T2, and attempts to find a specialisation H such that transitivity can be applied between Ti and H(T2). The heuristic used to find the spe-cialisation (discussed later) does not compromise the safety of the verification since if it fails and no specialisation is found, no result is deduced. Moreover, if a putative special-isation is found, the correctness of the specialisation is checked by testing whether the conditions for transitivity to apply do hold once the specialisation is applied. • A U T O T I M E takes two assertions, Xi and T 2, and attempts to find an appropriate time-shift t for one of the assertions so that transitivity can be applied. Recall that the time-shift rule Chapter 6. Developing a Practical Tool 125 only applies if t > 0. If t < 0 is found, then the verification system shifts T\ forward by —t time steps and attempts to apply transitivity between the shifted Ti and T 2; if t > 0, then T 2 is shifted forward by t time units, and then the verification system attempts to apply transitivity between Ti and the shifted T 2. • A L I G N S U B combines the ideas of the above two rules. Given two assertions it attempts to find a time-shift and specialisation so that when both are applied, transitivity can be used. • P R E T E N D allows a desired result to be assumed without proof. When deciding whether an overall proof structure is correct, it may be useful to assume some of the sub-results and then see whether combining the sub-results will obtain the overall goal, before putting effort into proving the sub-results. 
Furthermore, in a long proof built up over some pe-riod of time it may be desirable at different stages in proof development to replace some calls of V O S S with P R E T E N D . Having proved a property of the circuit using STE it may take too much time in proof development to always perform all STE verifications when the proof script is run. Although at the end, the entire verification script should be run completely, it is not necessary to always perform all trajectory evaluations in proof de-velopment. An important part of implementing this verification methodology was to integrate the trajec-tory evaluation and theorem proving aspects into one tool. Not only does this make the method-ology easier for the user (since the quirks of only one system have to be learned and only one conceptual framework and set of notations have to be learned), the practical soundness of the system is maintained (the user does not have to translate from one formalism to another). Moreover, using F L as the interface is very beneficial. Although this requires the user to be familiar with FL, once learned it provides a flexible and powerful proof tool. Using the basic verification library provided by the tool, the user can package the routines in different ways. Chapter 6. Developing a Practical Tool 126 The proof is written as an FL program that invokes the proof rules appropriately. This allows the proof to be built up in parts and combined. The use of a fully programmable proof script language — FL — removes much drudgery and tedium. A critical factor in trajectory evaluation, affecting both the performance and the automatic nature of STE, is the choice of the BDD variable orderings used in the trajectory evaluation. A poor choice of variable ordering can make trajectory evaluation impossible or slow [44]. A l -though the use of dynamic variable ordering techniques (one of which has been implemented in Voss) ameliorates the situation, the compositional method means that dynamic variable ordering is not a panacea. In many cases, there is simply not one variable ordering that can be used. The strength of the compositional theory is that it allows different variable orderings to be used for different trajectory evaluations. If different variable orderings must be used for each of many trajectory evaluations (for some proofs hundreds of trajectory evaluations could be done), using dynamic variable ordering alone might significantly degrade performance. On other hand, in many applications, good heuristics exist for choosing variable ordering automatically based on the structure of the TL formulas. One of the advantages of representing data at a high level (an integer ADT) is that knowledge of the type and operations on the type can be used to determine appropriate variable orderings. A useful technique is to provide as part of the FL library implementing a particular ADT, a function which takes an expression of the type and produces a 'good' variable ordering. This particular example illustrates the advantages of incorporating heuristics into a system to aid the user. Other examples of heuristics which proved to be useful are the heuristics which takes two assertions and try to find appropriate specialisations and time-shifts so that transitivity can be used between the two assertions. The algorithms that implement these heuristics are straightforward. Although there are a number of possible heuristics and algorithms that could be used, experience showed that simple implementations are quite flexible. 
Chapter 6. Developing a Practical Tool 127 Finding a time-shift: This algorithm takes the consequent of one assertion and the antecedent of another and determines whether if one of the formulas is time-shifted, the two formulas are related to each other (in that their defining sequences are ordered by the information ordering). String matching is the core of the algorithm, and although the extremely large 'alphabet' re-stricts the sophistication of the string matching algorithms that can be applied, in practice the simple structure of the formulas means that simple string matching algorithms are quite ade-quate. Finding a specialisation: This heuristic performs a restricted unification between two for-mulas to discover whether if one of the formulas is specialised, it is implied by the other (in that the defining sequence of one is ordered with respect to that of the specialised formula). A simi-lar approach is used in implementing Generalised Transitivity (Theorem 5.31). Since semantic information must be used as well as syntactic (two syntactically different expressions may be semantically equivalent), the effectiveness of the algorithm is limited by the power of the do-main knowledge incorporated into the tool. However, the simple structure of most antecedents means that a simple heuristic works well. It is also possible to incorporate both heuristics into one heuristic so that candidate time-shifts and specialisations are sought at the same time. To implement this completely is much more difficult since there may be a number of different time-shift and specialisation combina-tions that can be applied. It can also be computationally more expensive, since for each possible time-shift it may be necessary to use different domain knowledge. However, in practice, since formulas tend not be very large, this can be useful and practical. Here, the representation of data at the ADT level is very important practically since high-level information can be used to find whether time-shifts and specialisations will be appropriate; if a lower level representation were used, much more work would need to be done. In all cases, once a transformation is found, it is automatically applied and checked; this also Chapter 6. Developing a Practical Tool 128 shows that heuristics can be incorporated without compromising the soundness of the proof sys-tem. The core inference rules are always used for deducing results; the heuristics provided by the user or the verification system as FL functions are there to automate the proof (at least par-tially) in determining how the inference rules are to be used, and at no stage is the safety of the result compromised. Moreover, if a transformation cannot be found, suitable error messages can be printed indicating why such a transformation could not be found, helping the user to determine whether the attempted use of the rule was wrong (e.g. because the desired result is false), whether more information is needed (e.g. perhaps the rule pf consequence must be ap-plied to one of the assertions first), or more domain knowledge must be provided by the user. 6.4 Extending Trajectory Evaluation Algorithms The core of the practical tool proposed here is the ability to perform trajectory evaluation to check assertions of the form |= t\ g=z>h where g and h are TL formulas (actually T L n for-mulas since we are dealing with circuit models). 
The basis of these algorithms is the trajectory evaluation facility of Voss, which can compute results of the form j= (\ A=^>C ), where A and C are trajectory formulas. There is a trade-off between how efficiently trajectory evaluation can be done, and the class of assertions that can be checked. This section first describes and justifies the restrictions placed on assertions, and then outlines three possible algorithms that can be used to extend Voss's STE facility. (The advantages and disadvantages of these algorithms are discussed in Section 7.6 after the presentation of experimental evidence.) 6.4.1 Restrictions What are the problems in determining whether (= \ g=z>h ^? Chapter 6. Developing a Practical Tool 129 First, STE computes the =g> relation, rather than the =D> relation. However, as shown in Section 4.4.3, if only the realisable fragment of T L n is used, there is an efficient way to deduce the = o relation from the =^> relation. The limitation to the realisable fragment means that users cannot explicitly check whether a component of the circuit takes on an overconstrained value. But, the nature of the circuit model means that this is checked for implicitly in any tra-jectory evaluation. The underlying trajectory evaluation engine can easily check for antecedent failures by testing whether a Z appears in the defining trajectory of the antecedent. Thus, I argue that this limitation is not a severe restriction, and worth the price. Second, allowing a general TL„ formula in the antecedent can be very costly since it may require numerous trajectory evaluations to be done. Recall from Chapter 4 that computing the defining sequence sets of a disjunction is done by taking the union of the defining sequence sets of the disjuncts. Thus, the cardinality of the defining sequence sets is proportional to the number of disjuncts. At first sight, it may seem that in practice that the structure of formulas is such that this will not be a real problem. For circuit verification, how many formulas have more than a dozen disjuncts (a number of sequences that could probably easily be dealt with)? However, this is misleading since while disjuncts may not appear explicitly in a formula, they may actually be there, particularly when dealing with non-boolean data types. For example, a predicate on an integer data type such as [I] + [ J] = k -f / + m can translate into a very large number of disjuncts, even for moderate sized bit-widths. Thus, for performance reasons, one restriction placed on formulas is that trajectory evalu-ation is only done for assertions that have trajectory formulas as antecedents, i.e. for formulas g such that the cardinality of A*(<7) is one. Besides the performance justification, experience with STE verification has shown that the main need for enriching the logic is to enrich the con-sequents rather than the antecedents. Moreover, the use of the compositional theory allows the Chapter 6. Developing a Practical Tool 130 enriching of antecedents indirectly (for example through the disjunction, until, and general tran-sitivity rules). Nevertheless, even though experience so far with STE has not shown this to be a significant restriction, this is an undesirable restriction, and more work needs to be done here. The final restriction is made with respect to the infinite operators such as Global. Since the state space being modelled is finite, all trajectories must have repeated states; thus, in prin-ciple, it is only necessary to investigate a prefix of a trajectory. 
However, this requires knowing when a state in a trajectory has been repeated. Since, in the tool, symbolic sequences represent a number of sequences or trajectories, given a symbolic sequence we have to know for which element in the sequence it is the case that for all interpretations of variables there have been re-peated states. The parametric representation of state is unsuitable for this computation, which requires the characteristic representation to be used. However, if the characteristic representa-tion is to be used, then the advantages of STE over other model checking approaches is reduced. If infinite formulas must be tested, other approaches may well be more suitable. Moreover, for hardware verification, infinite operators are less important than in more general situations since timing becomes more critical. We are not interested that after a given stimulus, output happens some time in the future; we want to know that output happens within x ns of input. Trajectory evaluation's good model of time and its ability to support verifications where precise timing is important is one of its great advantages. Finally, in the same way the compositional rules can be used to enrich the antecedent, they can be used to allow the infinite operators to be expressed usefully (the until rule and its corollaries are good examples here). In summary, the STE-based algorithms proposed here check assertions of the form |= \ A = o h ), where 1. A and h are in the realisable fragment of TL„; 2. A is a trajectory formula; and 3. h does not contain any infinite operators. Chapter 6. Developing a Practical Tool 131 Through the use of the compositional rules, the limitations of (2) and (3) can be partially over-come. The rest of this chapter examines how Voss's ability to check formulas |= ^ A = g > C } can be used effectively. Three algorithms are presented. 6.4.2 Direct Method If A and C are trajectory formulas, the standard use of STE for model checking trajectory as-sertion of the form t\ A=>C } is straight-forward since the cardinality of At(A) (and hence Tk(A)) and Ak(C) are one. 8A and 8C are constructed, and rA is computed from 8A. The last part of the verification is to check whether 8C QTA. Where we choose the consequent to be a general formula g of TL„, we need to consider the entire set At(g). However, the basic idea is the same: construct 8A and compute rA, and then check whether \/8 e At(g), 8 C TA. HOW this is done is sketched in the pseudo-code below: Compute(g,j)= case g of [ t ] : r/[i] = H gohgi : Compute^, j) A Compute^, j) Next g : Compute^, j + -ic/ : - Compute(#,j) t : t f : f This algorithm is simple and straight-forward, although care must be taken in implementa-tions to ensure efficiency, particularly when dealing with ADTs such as vectors and integers, and derived operators such as the bounded versions of Global. First, only necessary informa-tion must be extracted from TA. Second, a very important optimisation in the Voss tool is event scheduling — usually from one time step to the next only a few state holding elements change their values. By detecting that components are stable for long periods of time, much work can be saved. Any modifications to the STE algorithm must not interfere with this. Chapter 6. 
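The Compute pseudo-code above can be made concrete. The fragment below is a runnable rendering of it in Python, simplified to a three-valued check (true, false, and unknown, i.e. Kleene logic); the full quaternary logic Q additionally distinguishes the overconstrained case, and the trajectory here is just a list of dictionaries rather than Voss's internal representation.

    # Evaluate a consequent against a precomputed defining trajectory, given as a
    # list of {node: '0' | '1' | 'U'} dictionaries, one per time step.

    def k_not(x):
        return None if x is None else (not x)

    def k_and(x, y):
        if x is False or y is False: return False
        if x is None or y is None:   return None
        return True

    def compute(g, traj, j=0):
        """Formulas: True | False | ('node', n) | ('and', g1, g2)
                     | ('not', g1) | ('next', g1)."""
        if g is True or g is False:
            return g
        tag = g[0]
        if tag == 'node':
            return {'1': True, '0': False, 'U': None}[traj[j][g[1]]]
        if tag == 'and':
            return k_and(compute(g[1], traj, j), compute(g[2], traj, j))
        if tag == 'not':
            return k_not(compute(g[1], traj, j))
        if tag == 'next':
            return compute(g[1], traj, j + 1)
        raise ValueError(g)

    traj = [{'B': '1', 'D': 'U'}, {'B': 'U', 'D': '0'}]   # made-up defining trajectory
    g = ('and', ('node', 'B'), ('next', ('not', ('node', 'D'))))
    print(compute(g, traj))    # True: B is 1 at time 0 and D is 0 at time 1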
Developing a Practical Tool 132 The way this was implemented in the prototype tool was to (i) determine from the conse-quent which state components are important, (ii) use Voss's trace facility to obtain the values of those components at relevant times, and (iii) compute whether the necessary relationship holds. All of this can be done in FL on top of Voss, obviating the need to make any changes to the un-derlying trajectory evaluation algorithm. Not only did this choice of implementation make de-veloping the prototype much easier, but more fundamentally it means that the event scheduling capacity of Voss is not impaired in any way. As a side note, modifications to this approach to deal with the infinite operators is, in princi-ple, straightforward. At each step in the trajectory evaluation the set of reachable states is added to. Once a fix-point is reached, the trajectory evaluation can stop. The use of partial information might improve the performance of the modification (in some cases — but not all — once a state s has been explored, we need not visit any state above s in the information ordering). Provided we are prepared to pay the cost of computing the characteristic representation of the state space, this is feasible, although care must be taken not to conflict with the event scheduling feature of Voss. 6.4.3 Using Testing Machines An alternative way of extending STE is through the use of testing machines. The goal is to determine, given a model M, whether \=M \ A=^>g The idea behind testing machines is to answer this question by constructing a model M' and a trajectory formula Cg such that \=M, § A=^>Cg ) if and only if \=M \ A=^>g [>. An analogous approach is the one adopted by Clarke et al. in extending their CTL model checking tool by using tableaux so that LTL for-mulas can be checked [38]. Other verification techniques also use this idea of using 'satellite' or observer processes to capture properties of systems [9]. As an example, using only trajectory formulas STE can check whether a node always takes Chapter 6. Developing a Practical Tool 133 on a certain value, y, say; it cannot check that a node never takes on the value y since the corre-sponding predicate is not simple and thus the question cannot be phrased as a trajectory formula. However, suppose the circuit were to have added to it comparator circuitry that compares the value on the node to y and sets a new node ./V with H if the node doesn't have the value y and L if it does. To check whether the node takes on the value can now be phrased as a trajectory formula. This section gives a brief outline of this, and detail can be found in Appendix B. As presented, model checking takes a model and a formula and then performs some compu-tation to check whether the model satisfies the formula. The basic motivation behind testing ma-chines is that some of the computation task can be simplified by moving the computation within the model itself. In essence what we do is construct a circuit that performs the model checking, compose this circuit with the existing circuit and then do straightforward model checking on the new circuit. This task is simplified by the close relationship between Q and C. There are thus two steps in the model checking algorithm. The first is to take the formula and to construct the testing machine; the second is to compose the testing machine with the original circuit and to perform model checking. The construction of the testing machine is done recursively based on the structure of the formula. 
An important part of the algorithm is constructing the testing machines for the basic predicates. For predicates dealing with boolean nodes, the construction is straightforward, es-sentially doing a type conversion. For other types — especially integers — it is somewhat more complex; for example, integer comparator and arithmetic circuits are needed. Given the test-ing machines for the basic predicates, there is a suite of standard ways in which these testing machines are composed, depending on the structure of the formula. One of the complexities is dealing with timing information. For example, in the formula g V h, g and h may be referring to instants in time far apart. This will mean that the testing machines that compute g and h will probably produce their results in time instants far apart, Chapter 6. Developing a Practical Tool 134 which in turn means that some sort of memory may be necessary. If the formula g V h is nested within a temporal operator such as the bounded always operator, it may be necessary to compute the values of g V h for many instants in time, which means that a large number of results may need to be stored temporarily. This will affect the computation and memory costs of model checking. The testing circuitry does not deal with the unbounded operators Global and Exis ts . The method of Section 6.4.2 could deal with these operators by recording the set of states already examined and other information. Testing circuitry could be built that duplicates that. As the state space is finite, we know that at some finite time all states will have been examined, but since the operators are unbounded it cannot be determined a priori at which instant in time all states will have been examined. Thus the scheme for examining the testing circuit at a particular moment fails. It seems that this can only be dealt with by modifying the STE algorithm. 6.4.4 Using Mapping Information Suppose that A and C are trajectory formulas which have the property that no boolean variable in C appears in A. Let $ i = (j A=t>C ). This is the set of assignments of boolean variables to values for which A=f>C. In particular, it describes the relationship between the variables in the antecedent formula and the variables in the consequent formula which must hold for the trajectory evaluation to succeed. C essentially extracts relevant components of TA, SO by making C general enough, enough useful mapping information can be used to make model checking TL„ feasible. Provided enough information is extracted, we can use $ i to determine whether (= t\ A=c>g holds: if for all in-terpretations in $ i , g holds (this is formalised later), and provided some side conditions hold, then so does |= ^ A=$>g ). An example will illustrate the method. Chapter 6. Developing a Practical Tool 135 J K L M N Figure 6.3: A CSA Adder The Carry-Save adder (CSA) shown in Figure 6.3 is used in a number of arithmetic circuits. An n-bit CSA adder consists of n independent single-bit full adders. For simplicity in the ex-ample, we consider a one-bit adder. Suppose that at time 0, the state of the circuit is such that node J has the value j, node K has the value k, and L has the value /. Then at time 1, the state of the circuit should be such that M has the value j © k © / (0 representing exclusive or) and N has the value j A k V j A / V k V /. This is easy enough to verify using a trajectory formula. 
However, there are verifications in which what we are interested in is not what the particular values of nodes M and N are, but that the sum of the values of nodes J, K and L at time 0 is equal to the sum of the nodes M and N at time 1. (This is exactly what we are interested in when verifying a Wallace-tree multiplier). In STE, this property can not be verified directly. Define the trajectory formulas A and C by: A=(J = j)A(K = k)A(L = I) C = Next (M = m) A(N = n) and then compute $ i = \ A=t>C $ i gives the constraints which must hold for the trajectory evaluation to hold. In particular it gives the constraints relating j , k and / with m and n. Suppose V#> € $i,(f>(g) = t where g = (j; + k +1 = m + n) (assuming here two-bit addition). If this is the case then we know that for each mapping of boolean variables to values for which the STE holds, (j + k + l = m + n). Or putting this in terms of an expression in T L n , that h — Next (j + k + l = [M] + [A7]) holds. Chapter 6. Developing a Practical Tool 136 Essentially g is h where we substitute the variables m and n into h to act as place-holders for the state components M and N. C is a way of extracting appropriate values out of TA- SO , if V</><E $ 1 , </>(#) = t a n d $ i = { A=>C |> then (] A=t>/* |>. One important check needs to be made - we must ensure that the above condition is not satisfied vacuously by $x being empty or only containing very few interpretations. What we want to ensure is that $ i covers all the interesting cases: that for every possible assignment of values to the boolean variables j , k and /, there is an assignment to the boolean variables m and n such that the trajectory evaluation holds. This is formalised now. Definition 6.1. Let U be a set of variables, and cf>: V —> B be an interpretation of variables. The set of exten-sions of 4> with respect to U is Ext(4>, U) = {tp e $ : Vu € V - U, 4>(v) = tp(v)} • The condition that $ i is non-trivial can be expressed as: for every interpretation cj> € $, there is an interpretation ip e Ext(4>,v(A)) where v(A) is the set of variables in A, andtp £ $x. In other words for every interpretation of variables of A there is an extension of that interpre-tation to include variables in C, such that the extension is an element of $ i . Note: If h! is a T L n formula not containing temporal operators, then by the remarks preced-ing Theorem 3.5, then we can consider h' as a predicate from Cn to Q. For convenience, if h' is strictly dependent on nodes { n i , . . . , n r}, then we write h\xi,... , xr). Theorem 6.1. Let A be a trajectory formula, and h = Next h' be a T L n formula such that h! contains no temporal operators. Let h' be strictly dependent on N = { n i , . . . , nr}. Let C = Next (r\\[nj) = WJ) where the WJ are distinct and disjoint from the variables in A; let W = {wi,... , wr}. Suppose: 1. $! = ^ A^>C\; Chapter 6. Developing a Practical Tool 137 2. V0G $uil>(h'(wu... ,wr)) =t; 3. V0 G $ , 30 G £ ^ ( 0 , W), and 0 G $ i ; and 4. V ^ € * i ; » = 0 , l ; j = l , . . . , n : T / ( A ) [ J ] ^ Z. Then (= <] A=t>/i Proof. (1) V0 G $ , 30 G Ext(<j>, W), 0 G $ i Hyp. 3. (2) V0 G <&, 30 G Ext(4, W),0(A)=>0(C) (l),Hyp. 1. (3) V0 G 30 G ^ ( 0 , W) , <$f(C) • rf{A) (2), Theorem 4.5. (4) V0 G $ , 3-0 G £ ,arf(0,PV),(jf ( C )[A:]Erf ( > i )[i],fc= 1,... ,n (3) , structure of C N . (5) V0 G $ , 30 G £ & ( < £ , W), 0 K ) • r f ^ k ] , j = 1,... , r (4) , structure of C. (6) V0G $,30 G Ext(4,W)MWj) = T«A)[nA,j = l , . . . , r (5) , Hyp. 4. (7) / i ' ( 0 ( u ; i ) , . . . 
,V>(uv)) = t Hyp. 2, definition of 0 ( / > ' ) • (8) V0 G $ , 30 G £ t o ( 0 , VK), ^ ' ( ^ [ m ] , . . . , r ^ K D = t (6) , (7). (9) V0 G $ , fc'(r*(yi)M,... , r f (A)[nr]) = t (8), Wj not in A. (10) V0G $ ,5a<r^) ,0(A)) = t (9). (11) V0 G $,0(A)=o0(/i) (10), Lemma4.4, Hyp. 4. This theorem can be generalised to deal with assertions such as |= (j A=>( A Next'fy) |) i=i • and implemented. Chapter 7 Examples This chapter shows that the ideas presented in this thesis can be used in practice. The verifica-tion of the examples done in this chapter requires a relatively rich temporal logic — trajectory formulas are often not rich enough — and efficient methods of model checking. Efficient al-gorithms for performing STE are essential, but in themselves not enough; the compositional theory for TL is necessary. Section 7.1 presents the verification of a number of simple examples performed using the first prototype verification tool. These examples are used as illustrations of the use of the in-ference rules. Section 7.2 presents an example verification of a circuit that is well suited for verification by traditional BDD-based model checkers such as SMV. The B8ZS encoder chip verified has a small state space which is easily tractable by the traditional methods. While the circuit can easily be represented as a partially ordered model, it is difficult to use the methods proposed by this thesis to verify this circuit completely. This example shows some interesting points about the need for expressive logics, and shows some limitations of the approach pro-posed in this thesis. Section 7.3 describes the verification of more substantial circuits, multipliers, which can have up to 20 000 gates. These are circuits that are beyond BDD-based automatic model check-ers and require the use of methods such as composition and abstraction. The verification of a number of different multipliers are described and analysed. One of the verified multipliers is Benchmark 17 of the IFIP WG10.5 Hardware Verification Benchmark Suite. Section 7.4 builds on the verification of this multiplier and shows how its 138 Chapter 7. Examples 139 verification is used in the verification of a parallel matrix multiplier circuit (Benchmark 22 of the suite); the largest version of this circuit verified contains over 100 000 gates. The examples mentioned here all show that these methods are well suited to examples where detailed timings at which events happen is known. Section 7.5 shows how time can be dealt with in a more generalised way. Although this section is more speculative in nature (the verification has not been mechanised) it shows that using the inference rules and inducting over time, allows the practical use of TL in a more expressive way. Finally, Section 7.6 summarises the results of this chapter and evaluates the methods pro-posed. 7.1 Simple Example 7.1.1 Simple Example 1 For the first example, consider the circuit shown in Figure 7.1 which takes in three numbers m , n and o on nodes M, N and O, and produces o + m a x ( m , n) on R. There are three parts to the circuit: a comparator compares the value on M with the value on N and produces H on P if the number at M is bigger than the number on ./V and produces 0 otherwise; a selector takes the values at M, N and O and produces at node Q the value at M if P is set to H, and the value at N otherwise; the third part of the circuit takes the values at node Q and O, and produces their sum at node R. 
This example is one which could be verified using STE alone, but its small size makes it useful as an example. Verification starts by checking the correctness of the individual components. The verifica-tion of each component is done in the presence of the rest of the system, which means that any unintended interference will be detected. These individual proofs are put together using spe-cialisation, time-shifting and transitivity. An outline of the formal proof follows. To simplify notation, a —> b\c is used as shorthand for (a =$> b) A((->a) =>• c). Chapter 7. Examples 140 M—•/-Figure 7.1: Simple Example 1 Let 4 ' = ([Af]=m)A([/Y] = n)A([0] = o). Let A = A' A Next .4' A N e x t M ' . LetC = Next3(ra>rc [i?] = m + o | [i?] = ra + o). We wish to show that (= <] y l=oC [>. (1) ( = ^ , 4 = > N e X t ( [ P ] = ( m > n ) ) ^ By STE (2) |= §A'A[P] = x =ONext(a; [Q] = m | [Q] = n) |> By STE (3) M ( [ 0 ] = y)A([Q] = *)=^Next([/i!] = y + ^ By STE (4) |= ^Next (^ 'A([P] = m > n ) ) ^ > N e x t 2 ( m > n -> [Q] = m | [Q] = n) [> Time-shift, specialise (2) (5) |= <j .4 =>Next 2(m>n ->• [Q] = m | [(?] = n) [} (1), (4), transitivity. (6) ^^Nex t 2 ( [0 ] = o ) A ( m > n -> [Q] = m | [Q] = n)=>C) Time-shift, specialise (3) (7) \ A^>C \ (5), (6), transitivity. Perhaps the most interesting part of this proof is how specialisation and transitivity are used. Consider how (1) and (2) are combined. Note that A contains all the information that Next A' does; and note the similarity in structure between [P] = ( m > n) and [P] = x. By time-shifting (2) as well as substituting m > n for x transforms (2) into (4), which can be combined with (1) Chapter 7. Examples 141 using transitivity. The other place in which specialisation was used was in line (6). Here, using only substi-tution on line 3 is inadequate; a much richer transformation is needed. Rather than just substi-tuting one expression for z in (3), two different substitutions are made, which are qualified and combined (one substitution is made when m > n, and the other when m <n). This proof was done in the first verification tool, where it is easier to do than manually be-cause the time-shifts and specialisations are found automatically. Steps 4 and 5 are done with a call to one of the automated rules; and steps 6 and 7 with another call to the same rule. A full description can be found in [76], and the FL proof script can be found in Section C l . 7.1.2 Hidden Weighted Bit The hidden weighted bit problem was one of the first to be proved to need exponential space to verify using traditional BDD-based methods [21]. A circuit for an 8-bit version is shown in Figure 7.2. The verification of this was done in the first prototype tool; the proof is outlined here, and a full description including the one page proof script is described in a technical report [76]. -U c 10 a o Counter Chooser Figure 7.2: Circuit for the 8-bit Hidden Weighted Bit Problem In this version, the global input x i , . . . , xn is copied to two buffers. The Counter part of the circuit computes the number of 1 's on the input (i.e. Y,nXi). The Chooser part of the circuit Chapter 7. Examples 142 takes the number j output on CountNode (the number is in binary form, hence if there are n input lines, CountNode comprises [lg nj + 1 lines), and outputs the value Xj on Result and on Error when j > 0. If j = 0 then Error is set to 1. 
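The function computed by this circuit has a compact reference specification, which is worth stating explicitly since the verification is decomposed along the Counter/Chooser boundary. The Python below is an illustrative sketch (n = 8, 1-based indexing of the inputs, Error assumed to be 0 when a result is produced); it is not the FL specification used in the proof.

    def hwb(x):                      # x is the input bit vector (x1, ..., xn)
        j = sum(x)                   # the count produced by the Counter on CountNode
        if j == 0:
            return None, 1           # Result unconstrained, Error = 1
        return x[j - 1], 0           # Result = x_j, Error = 0

    # for example, the input 1,0,1,1,0,0,1,0 contains four ones, so Result is x_4 = 1
    assert hwb((1, 0, 1, 1, 0, 0, 1, 0)) == (1, 0)
    assert hwb((0,) * 8) == (None, 1)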
Intuitively, a verification of this circuit as a whole takes exponential time and space (in n) because the output value on CountNode is so complicated, in terms of the boolean variables, that no suitable variable ordering can be found so that the Chooser part of the circuit can be verified efficiently. The virtue of the compositional approach is clearly illustrated: by decoupling the verification of the two parts of the circuit, we can choose suitable individual variable orderings for both parts of the circuit; moreover, it is more efficient to verify the chooser circuit for an arbitrary input j (which only needs very simple BDDs to represent it), and then substitute for j the actual input, than to verify for the actual input (which needs more complicated BDDs to represent it). There are five steps in the proof, in which all the time-shifts and specialisations are found automatically. • The proof that the copying of the input to the buffer is correct — BufferTheorem. • The verification Counter part of the circuit — CounterTheorem. • The composition of BufferTheorem and CounterTheorem. This is done in two steps: first, CounterTheorem is time-shifted along so that transitivity between BufferTheorem and CounterTheorem can be used to produce BufferCounterTheorem. BufferCounterTheorem is conjoined with BufferTheorem so that we can use the value of Buffer! at a later stage. Call the result of this stageITheorem. • Verification of the Chooser part of the circuitry — ChooserTheorem. • Composition of stage ITheorem and ChooserTheorem by time-shifting ChooserTheorem by an appropriate amount and specialising this so that transitivity can be used between Chapter 7. Examples 143 stagel and ChooserTheorem. Results: We verified the circuit for different values of n (4, 8,16, 32, 64,128). For these values, verification takes roughly cubic time (and importantly, space was not an issue). The verification of the 128 bit problem took just under 27 minutes on a Sun 10/51. Compared to this, verification of the system as one unit was not possible for n = 64 or larger. The FL script for the verification is shown in Section C.2 7.1.3 Carry-save Adder The carry save adder (CSA) shown in Figure 6.3 was verified using all three extensions to the STE algorithm described in Chapter 6. Table 7.3 summarises the computational cost of verifi-cation of a 64 bit CSA. Algorithm Time (s) 1 Direct 3.8 2 Testing Machine 3.6 3 Mapping information 2.6 Table 7.3: CSA Verification: Experimental Results The experiments were run on a DEC Alpha 3000, and show that for all three approaches, verification is easily accomplished. The FL script for this is shown in Section C.3. Note that the compositional theory is not used to verify this circuit. 7.2 B8ZS Encoder/Decoder This example shows the verification of a B8ZS encoder, a very simple circuit but one which would be very difficult to do in traditional STE and illustrates some points about the style of verification. Note that the compositional theory is not used to verify this circuit. Chapter 7. Examples 144 7.2.1 Description of Circuit Bipolar with eight zero substitution coding (B8ZS) is a method of coding data transmission used in certain networks. Some digital networks use Alternate Mark Inversion: zeros are encoded by '0', and ones are encoded alternately by '+' and '—'. The alternation of pluses and minuses is used to help resynchronise the network. If there are too many zeros in a row (over fifteen - something common in data transmission) the clock may wander. 
B8ZS encoding is used to encode any sequence of eight zeros by a code word. If the preceding 1 was encoded by ' +', then the code word '000+-0-+' is substituted; if the preceding 1 was encoded with a '—', then the code word is '000—1-0+-'. Using this encoding, the maximum allowable number of consecutive zeros is seven. The implementation of the circuit is taken from the design of a CMOS ZPAL implementa-tion of the encoder (and corresponding decoder) by Advanced Micro Devices [4]. The encoder comprises two parts. One PAL detects strings of eight zeros and delays the input stream to en-sure alignment. If the first PAL detects eight zeros, the second PAL encodes the data depending on whether eight zeros have been detected or not. Figure 7.3 given an external view of the chip. The inputs are a reset line (active low), and NRZJN which provides the input. There are two outputs, PPO and NPO which as a pair represent the encoding: (1,0) is the '+' encoding of a one, (0, 1) is the '—' encoding of a one, (0,0) encodes a zero, and (1, 1) is not used. Output emerges six clock cycles after input. 7.2.2 Verification There are two questions one could ask in verification: 1. Does the implementation meet its specification? Here we want to check that the output we see on NPO and PPO is consistent with the input. Chapter 7. Examples 145 RST PPO NRZJN NPO Figure 7.3: B8ZS Encoder 2. Does the implementation have the properties that we expect? (Specification validation) In particular is it the case that: • At no stage are there eight consecutive (PPO, NPO) pairs which encode a zero; • At no stage are there fifteen or more consecutive zeros on the PPO output; and • At no stage are there fifteen or more consecutive zeros on the NPO output. Checking that the implementation meets the specification is a bit tricky, and shows the need for a richer logic than the set of trajectory formulas. With trajectory formulas, the obvious way to perform verification is to examine the output and check to see that the output produced is determined by the finite state machine which the PALs implement. However, the equations of the FSM are complicated and non-intuitive. Verification that the implementation is 'correct' doesn't give us information about the specification. Worse, essentially the verification condi-tions would be a duplicate of the implementation, increasing the likelihood of an error being duplicated. And there don't seem to be easier, higher level ways of expressing correctness us-ing trajectory formulas since the circuit has the property that the n-th output bit is dependent on the first input bit. Using the richer logic, a far better way of verifying the circuit is to show that the input can be inferred from the output. Suppose that we want to check the output bit pair at time k (recall Chapter 7. Examples 146 that the output is encoded as the (PPO, NPO) pair). If this bit pair is in the middle of one of the code words then the input bit at time fc — 6 must be a zero; otherwise the (fc — 6)-th input bit can be inferred directly from the value of the bit pair. The testing machine method was used in verification. To test that the bits are correctly trans-lated, the proof first shows that after being reset the encoder enters a set of reachable states, and that once in a reachable state the encoder remains in this set of states. Next, the proof shows that if the encoder starts in the reachable set then the output of the encoder is correct. The com-putational cost of all of this is approximately 30s on a Sun 10/51. 
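An executable reference model makes the encoding rule, and the zero-run properties listed above, concrete. The Python sketch below is an illustration only: representing the line code as a string over '0', '+', '-', the choice of initial polarity, and the run-length checker are assumptions rather than the PAL equations.

    def b8zs_encode(bits):
        # AMI with eight-zero substitution; 'last' is the polarity of the most recent
        # pulse (starting it at '+' matches the all-zero-prefix behaviour noted below).
        out, last, zeros = [], '+', 0
        for b in bits:
            if b == 1:
                last = '-' if last == '+' else '+'
                out.append(last)
                zeros = 0
            else:
                out.append('0')
                zeros += 1
                if zeros == 8:
                    word = '000+-0-+' if last == '+' else '000-+0+-'
                    out[-8:] = list(word)
                    last, zeros = word[-1], 0
        return ''.join(out)

    def longest_run(seq, v):
        best = run = 0
        for x in seq:
            run = run + 1 if x == v else 0
            best = max(best, run)
        return best

    def check_properties(code):      # the three properties checked in the second step
        ppo = [1 if c == '+' else 0 for c in code]    # (PPO, NPO) = (1, 0) encodes '+'
        npo = [1 if c == '-' else 0 for c in code]    # (PPO, NPO) = (0, 1) encodes '-'
        zero_pairs = [1 if c == '0' else 0 for c in code]
        return (longest_run(zero_pairs, 1) < 8 and
                longest_run(ppo, 0) < 15 and
                longest_run(npo, 0) < 15)

    assert check_properties(b8zs_encode([0] * 64))
    assert check_properties(b8zs_encode(([1] + [0] * 8 + [1]) * 10))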
The second step is to check that the implementation has properties that cannot be directly inferred from the design. In particular we want to show that at no stage are there eight or more zeros consecutively produced by the encoding of PPO and NPO and also that if we look at PPO and NPO individually that at no stage are there fifteen or more zeros consecutively. These con-ditions can be expressed succinctly in TL, while they could not be expressed as trajectory for-mulas. The major restriction here is that using testing machines, the antecedent can only be a finite formula. We cannot check that this result holds for arbitrary input. What we can show is that given arbitrary input of length n the circuit has the properties we expect. Using testing machines, verification for n = 100 presents no problem (10s on a Sun 10/51). In principle, the direct method could verify the general case. The final verification that was done was to implement the complementary B8ZS decoder and to check that when the output of the encoder is given as input to the decoder, then the output of the decoder is just the input of the encoder, suitably delayed. Again, it was possible using the testing machine method to check this for finite input prefixes. An error was detected: the initial states of the encoder and decoder are not synchronised. If the first eight input bits given the encoder are zero, the code word used by the encoder is '000+-0-+'; however, the decoder expects the other code word to be used if the first eight bits are zero. This error only occurs when Chapter 7. Examples 147 the first eight bits are zero as the state transition table of the decoder has the pleasant property that the first encoded 1 (either a '+' or ' —') emitted by the encoder synchronises the decoder. This example illustrates some interesting points about verification. However, it is not a good example for trajectory evaluation; since the state space of the circuit is quite small (fewer than 20 state holding components), other verification methods work well. 7.3 Multipliers Since BDDs are not able to represent the multiplication of two numbers efficiently [21], auto-matic model checking algorithms find the verification of multipliers very challenging. For this reason, multipliers have received much attention in the literature. The methods proposed in this thesis have been used successfully to verify a number of multipliers: three of these examples are briefly discussed, and then one case is presented in great detail. The section concludes by comparing these verification studies to other work. 7.3.1 Preliminary Work The first multiplier verified using a compositional theory for STE was a simple n-bit multiplier consisting of n full adders. The verification is accomplished by using STE to prove that each adder works correctly, and then by applying the inference rules to show that the collection of adders performs multiplication. The key inference rules used were time-shifting, specialisation, transitivity, and rules of consequence. Of immense practical importance in the prototype tool used to perform the verification was the ability to use a simple theorem prover coupled with some decision procedures to reason about integers. This enabled the tool to break the limitation of BDDs. Also important for ease of use of the system is that specialisations and time-shifts were all found automatically by the tool. Chapter 7. Examples 148 The complete verification of this 64 bit multiplier took just less than 15 minutes of CPU time on a Sun 10/51. 
For this verification, trajectory formulas were sufficient to express all needed properties. A full description of the verification, including the proof script can be found in a technical report [76]. The next step — the verification of a Wallace tree multiplier [66] — showed the need for a richer logic. A Wallace tree multiplier uses Carry-Save adders (CSA) as its basic components. Example 5.9 indicated that what is important in the verification of a CSA is to show that the sum of the two outputs is the sum of the three inputs. This cannot be represented as a trajectory formula. What trajectory formulas can represent is the particular values of each output, which is not helpful. As a preliminary test, the mapping method was used to extend the expressiveness of tra-jectory evaluation based verification, and the verification completed. The implementation of the prototype algorithm was not particularly efficient, but the need for a richer logic, and the feasibility of the approach was demonstrated. 7.3.2 IEEE Floating Point Multiplier One of the largest verifications done using the theory presented in this thesis is the verification of an IEEE compliant floating point multiplier by Aagaard and Seger [2]. The multiplier, im-plemented in structural VHDL, includes the following features: • double precision floating point; • radix eight multiplier array with carry-save adders; • four stage pipeline; and • three 56-bit carry-select adders. The circuit verified is approximately 33 000 gates in size. Chapter 7. Examples 149 The verification was done using the VossProver, a proof system built by Seger on top of Voss. Based on the first prototype tool discussed here, this implements the theory presented in [78], augmented by using the mapping approach to allow a more expressive logic than tra-jectory formulas. The VossProver contains extensive integer rewriting routines, which are very important in verification proofs. Aagaard and Seger estimate that verifying the circuit took approximately twenty days of work. The computational cost of the verification was reasonable (a few hours on a DEC Alpha 3000). 7.3.3 IFIP WG10.5 Benchmark Example Description of Circuit Benchmark 17 of the IFIP WG10.5 Benchmark Suite is a multiplier which takes two n-bit num-bers and returns a 2n bit number representing their multiplication. This description is heavily dependent on the IFIP documentation.1 Let A = a„_! . . . aiao and B = . . . Mo- Then A x B = Y^To 2i(EJ=o Vmbj). Implementing this is straightforward: the basic operation is multiplying one bit of A with one bit of B and adding this to the partial sum. The component that accomplishes this basic operation takes four inputs: a One bit of the multiplicand, b One bit of the multiplier, c One bit of the partial sum previously computed, CIN A one bit carry from the partial sum previously computed; and computes a * b + c + CIN producing two outputs: S One bit partial sum, and 1 f t p : / / g o e t h e . i r a . u k a . d e / p u b / b e n c h m a r k s / M u l t i p l i e r / Chapter 7. Examples 150 COUT One bit carry. The equations for the output are: S = a Ab ® (c® CIN) COUT = a Ab A c V a Ab A CIN V c A CIN The implementation of the equations (as given in the IFIP documentation) and the graphical symbol used to represent these components is presented in Figure 7.4. S COUT Figure 7.4: Base Module for Multiplier A vector of these components multiplies one bit of B with the whole of A and adds in any partial answer already computed. 
It might seem appropriate rather than just having a vector of these components to also have an adder which added in carries from less significant columns to the results of more significant bits. The problem with doing that is that each stage would be limited by the need for possible carries from the least significant bit to be propagated to the most significant bit, with concomitant increases in the time and number of gates needed. The approach used in the implementation is to produce two outputs: the first output is the sum of the bit-wise addition of the two inputs, ignoring the carries; and the second output is the Chapter 7. Examples 151 carries of the bit-wise addition. Both of these outputs are forwarded to the next stage; here the carries are added in and new carries generated. We can consider the vector of S outputs as one n-bit number and the vector of COUT outputs as another n-bit number. If we consider stage A; by itself, if the vector of a inputs is x, if the b inputs are all the bit y, and if the vector of c inputs is z, then we shall have that S + 2k+1 COUT = xy + z. (This is something that must be proved in the verification.) These components are arranged in a grid (Figure 7.5 shows how a 4 bit multiplier is ar-ranged). The multiplier contains n stages, each of which multiplies one bit of B with A and adds it to the partial result computed so far. After k stages, n + k bits of the partial answer have been computed. The components making up each stage are arranged in columns in the figure. The components making up a row compute one bit of the final answer; carries from less signifi-cant bits are added in, and any generated carries are output for the more significant rows to take care of. In the Figure 7.5, each of the base components is labelled with indices: i: j indicates that the component is the j-th component of the z-th stage. Having passed through n stages, the full multiplication has been computed. However, as the final stage still outputs two numbers, the carries must now all be added in. Therefore the final step in the multiplier is a row of n — 1 full adders that adds in carries. These full adders are labelled F A in Figure 7.5. The implementation of the circuit was done in Voss's EXE format as a detailed gate-level description of the circuit. A unit-delay model was used, although this is essential neither to the implementation nor the verification. Chapter 7. Examples A(3 0) \Ground (3) <°> com a "0:34 c CIN COUT a " 0 : 2 4 c CIN COUT a *0:ls| c CIN COUT a "0:0*1 c cm COUT a "1:34 c CIN COUT a "1:24 c CIN COUT a " 1 : 1 4 c CIN COUT a "1:04 c CIN 111. COUT ^ 2 : 3 4 c CIN COUT " 2 : 2 4 c CIN COUT " 2 : 1 4 c CIN COUT a 2^:04 c CIN COUT a " 3 : 3 4 c CIN COUT "3:24 c CIN COUT a "3:14 c CIN COUT "3 :04 c CIN <3> 7 4 > FA24 FA,4 F A . 4 (3) Out(7..0) Am Figure 7.5: Schematic of Multiplier Chapter 7. Examples 153 Verification This section presents a detailed description of the verification of the four bit multiplier presented in Figure 7.5. This example is small enough that the complete proof can be described, and this is useful to show how the inference rules are used. However, the example is big enough that there is some tedium involved too; it must be emphasised that in practice the verification is done using FL as the proof script language, which alleviates much of the tedium. It is also worth mentioning that the verification of a four bit multiplier is well within the capacity of trajectory evaluation. 
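Both the base module's equations and the per-stage claim S + 2^(k+1) * COUT = xy + z can be exercised on a small behavioural sketch. The Python below is an illustration of the proof structure only: the carry-save step is written over integers, abstracting away the gate-level wiring of a stage, and the bit-width n = 4 and the exhaustive checks are assumptions.

    # the base module: the S and COUT equations given above
    def base_module(a, b, c, cin):
        s    = (a & b) ^ (c ^ cin)
        cout = (a & b & c) | (a & b & cin) | (c & cin)
        return s, cout

    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                for cin in (0, 1):
                    s, cout = base_module(a, b, c, cin)
                    assert s + 2 * cout == a * b + c + cin

    # one stage as a bit-wise carry-save step; the carries are kept at their true weight
    def csa_step(x, y, z):
        return x ^ y ^ z, ((x & y) | (x & z) | (y & z)) << 1

    n = 4
    def multiply(a, b):
        rs, rc = 0, 0
        for k in range(n):
            bk = (b >> k) & 1
            rs, rc = csa_step(rs, rc, (a * bk) << k)
            # the invariant established per stage (rc already carries its weight):
            # RS_k + 2^(k+1) * RC_k = a * b(k..0)
            assert rs + rc == a * (b & ((1 << (k + 1)) - 1))
        return rs + rc               # the final row of full adders folds the carries in

    assert all(multiply(a, b) == a * b for a in range(2 ** n) for b in range(2 ** n))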
Although the proof is not independent of data path width since issues of timing are important, it may be useful to do the verification for a small bit width first using trajectory evaluation by itself. Identifying structure Using the inference rules relies on using the properties of integers to break the limitations of BDDs. Therefore, the first step in the proof is to identify some structure, in particular to identify which collections of nodes should be treated as integers. Notation: BM(i: j)(x) refers to node x in the basic module i: j; FA,(x) refers to node x of the full adder FAi. For each stage, we consider the collection of a inputs as an integer, the collection of b inputs as an integer, and so on . . . Similarly, the collection of S outputs and COUT outputs are both considered as integers. Table 7.4 presents the correspondences. The following bit vector variables are used: a stands for the bit vector ( a 3 , . . . , a 0 ) ; b stands for the bit vector (6 3,... ,b0) (a and b are the inputs to the circuit); c stands for the bit vector (c7,... , c0); d stands for the bit vector (d2,... , d0). If A 7 is a bit vector, then N(i) refers to the z-th least significant bit (so N(0) is the least significant bit), and N(i..j) refers to the (sub)bit vector (N(i),... , N(j)). We also use the Chapter 7. Examples 154 Integer node Vector of bit nodes A The four bit integer input B The four bit integer input 0 Output of the or gate RSi S output of stage i for i = 0,... ,3 (BM(i: 3)(5),... ,BM(i: 0)(S),BM(i - 1 :0)(S),...,BM(0:0)(S)) RC{ COUT output of stage i for i = 0,... ,3 (BM(i: 3)(COUT),... , BM(i: 0)(COUT) RS4 The output Out {0,FA2,... ,FA 0 ,5M(3: 0)(5),... ,5Af(0: 0)(S)) Table 7.4: Benchmark 17: Correspondence Between Integer and Bit Nodes short hand that Rd = d is short for i?C;(2..0) = d (i?C; is four bits wide, d is three bits wide). Defining this correspondence has two advantages: the level of abstraction is raised since the verifier can think in terms of integers rather than bit vectors; and the verifier can use properties of integers to prove theorems without having to convert everything into BDDs. Anomalies in circuit implementation There are a number of aspects of the circuit that can be criticised and improved. The most obvious is that BM(i: 3)(COUT) = 0 for all i. In turn, this means that one of the inputs to the or gate is always 0, i.e. RS4(7) depends entirely on FA2(COUT). The only advantage of this implementation is that it makes the circuit description (slightly) more regular. The cost is the extra circuitry and time required to perform the computa-tion. Furthermore, this implementation makes the proof more complicated. The final step in the proof below will be to show that since RS3 + 2ARC3 = ab that RS4 = ab; this is only true be-cause the one input to the or gate is zero. Therefore, as the proof is constructed, we shall prove that BM(i,3)(COUT) = 0, complicating the proof slightly. A better implementation would have meant a simpler proof. Chapter 7. Examples 155 The Proof Stage 0 The first step is to show the first stage performs the correct multiplication/addition. To make STE as efficient as possible, we use as little information as possible by considering only one bit of b. However, at a later stage we shall want to use all the bits of b, so the next step is to include the rest of b in the result. There are a number ways of doing this. One would be to use the identity rule to show that B has any value imposed on it and then use conjunc-tion with Result 7.1. 
However, in this case it is easier to use one of the rules of consequence (Theorem 5.18) and strengthen the antecedent. |= By rule of consequence from Result 7.1 This use of the rule of consequence relies on Lemma 5.27, and is motivated by the fact that the antecedent of Result 7.2 uses more information than that of Result 7.1 Stage 1 The first step is to show Stage 1 performs the correct multiplication/addition. Note, the proof is done for arbitrary input for RS0 and RC0 rather than the actual input. This is im-portant because STE is used to do the proof; if the actual input (which is a function of A and B) were used, in general STE would not be able to cope. 21 [RCo] = ab{0) A [RCp] (3) = 0)} (7.1) (7.2) 21[RCo} = ab(0) A [RCo](3) = 0)) Chapter 7. Examples 156 |= By STE <] Global [(3,100)] [A] = a A [B](l) = bi A [RS0] = c(3..0) A [RC0] = d A [i?Co](3) = 0 (7.3) = ^ Global [(6,100)] [fl,S1] + 22[JRCi] = c(3..0) + 2 16/ + 21a6(l) A [i?Ci](3) = 0} In proving this result, STE is used; this implies that BDDs are used to represent data as this is necessary for STE. However, once the proof is done, the result is only stored symbolically, and the BDDs used to represent Result 7.3 are garbage collected. Having proved this, we now combine Results 7.2 and 7.3 using a combination of transitivity and specialisation. This is useful to do since we know something about the values of RS0 and RC0; it is feasible to do since the consequent of Result 7.3 is strictly dependent on the nodes RSo and RC0 — this means that Generalised Transitivity — Theorem 5.31 — can be used. Informally, Theorem 5.31 says that c(3..0) + 21d = ab{0). \= By Generalised Transitivity 4 Global [(0,100)] ([A] = a A [B] = b) => Global [(6,100)] [RSi] + 22[RCl] = ab(0) + 21ab(l) A [RC1}(3) = 0^ Now we have the output of stage 1 solely in terms of a and b. This can be rewritten into a more elegant form. The proving system has integer rewriting procedures which automatically rewrites ab(n—1..0) + 2nab(n) as ab(n..0). Thus applying Lemma 5.24 and the rule of conse-quence, Theorem 5.18, yields the next result: Chapter 7. Examples 157 |= By rule of consequence (] Global [(0,100)] ([A]-a A[B] = b) = > Global [(6,100)] [RSi] + 22[RC1] = ab(1..0) A [RCi](3) = 0\ Stages 2 and 3 The steps are exactly the same as stage 1. |= By STE ^ Global [(6,100)] [A] = a A [B(2)] = b2 A[RS1] = c(4..0) A[RC1} = d A[RC1](3) = 0) (7.6) = > Global [(9,100)] [RS2] + 23[RC2} = c{4..0} + 22d + 22ab(2) A [RC2]{3) = 0) |= By Generalised Transitivity (Results 7.5 and 7.6) 4 Global [(0,100)] ([A] = a A [B] = b) =0> Global [(9,100)] [RS2] + 23[RC2} = ab(l..0) + 22ab(2) A [RC2}(3) = 0\ \= By rule of consequence from Result 7.7 d Global [(0,100)] ([A] = a A[B] = b) Global [(9,100)] [RS2) + 23[RC2} = ab(2..0) A [RC2](3) = 0) (7.7) (7.8) Chapter 7. Examples 158 |= By STE 4 Global [(9,100)] [A] = a A [B] (3) = 63 A [RS2] = c(5..0) A [#C2] = A [i?C2] (3) = 0 (7.9) = 0 Global [(12,100)] [RS3] + 24[RC3] = c(5..0) + 23d + 23a6(3) A[flC3](3) = 0^ (7.10) |= By Generalised Transitivity (Results 7.8 and 7.9) ^ Global [(0,100)] ([A] = aA[B] = b) =£> Global [(12,100)] [RS3] + 24[RC3] = ab(2..0) + 23ab(3) A [RC3](3) = 0ty \= By rule of consequence from Result 7.10 { Global [(0,100)] ([A] = a A [B] = b) => Global [(12,100)] [RS3] + 24[RC3] = ab A [RC3](3) = 0 [> The adder stage The final step in the proof is to ensure that the last, adder stage, adds in the carries correctly. Here possible carries in the least significant bit must be passed to the most significant bit. 
For large bit widths, this adder stage may take tens or hundreds of nanoseconds, so timing may be important here. (7.11) |= By STE ^ Global [(12,100)] ([#$,] = c(6..0) A [RC3] = d A [RC3](3) = 0) (7.12) =f> Global [(22,100)] ([RS4] = (c(6..0) + 24d)(7..0))) Now, using general transitivity, we have: Chapter 7. Examples 159 Bit width Number of gates D Time (s) T Time (s) 4 135 3.9 5.4 8 473 9.8 15.0 16 1841 36.0 60.8 32 7265 168.7 371.4 64 28865 1081.9 > 6000 Table 7.5: Verification Times for Benchmark 17 Multiplier |= By Generalised Transitivity (Results 7.11 and 7.12) <] Global [(0,100)] {[A] = a A[B] = b) (7.13) => Global [(22,100)] ([RS4] = a * b) ) Again, the automatic rewrite systems recognises that ab is an eight bit number, and so rewrites a * 6(7..0) as a * 6. This concludes the proof. Appendix C has the FL proof script for the multiplier example. Experimental results and comments This IFIP WG10.5 Benchmark 17 multiplier was veri-fied for a number of bit widths (the n bit width case multiplies two n-bit numbers and produces a 2n bit number). The time taken to perform the verification on a DEC Alpha 3000 is shown in Table 7.5: the column labelled 'D Time' shows the time taken using the direct method, and the column labelled 'T Time' shows the time taken using the testing machine approach (all times shown in seconds). These results are useful for evaluating the testing machine approach, and are used in the discussion on testing machines in Section 7.4.4. The proof script itself is short (less than 200 lines, about 50 of which are declarations) and straightforward to write, relying only on simple properties of integers. The full script can be found in Section C.4. Once structure in the circuit is identified by associating integers with collections of bit valued nodes, the verification no longer has to deal with bits, and at no stage Chapter 7. Examples 160 does the verification have to concern itself with how the full adders or the base components are actually implemented. The reason why STE cannot deal with the verification by itself is not because of the size of the circuit; the problem is that there is no good variable ordering for the multiplication of two bit vectors. However, good variable orderings are definitely possible for verifying the individual components of the multiplier with STE, and good heuristics to find good ordering can easily be automated. 7.3.4 Other Multiplier Verification One of the main examples used in this thesis is the verification of a multiplier circuit. To put the thesis work in context, other work on multipliers is surveyed. Multipliers represent an important class of circuit, because arithmetic circuits are in themselves important, and because they are particularly challenging for BDD-based approaches. Simonis uses a simple proof checker to verify a multiplier in [118]. The circuit description is represented in a Prolog-like language, and the correctness proof simulates a hand proof: nine correctness conditions are identified and checked (although it is not proved that these nine con-ditions imply that the multiplier works correctly). Each of the conditions is checked by a Prolog routine. Although the computational costs of verification were low, the correctness of the proof relies on the correctness of the nine conditions and the correctness of the Prolog routines. Tim-ing is not checked. Pierre presents the verification of the WG 10.5 multiplier in [108,109]. The proof is done in the Boyer-Moore prover Nqthm. 
The work presented is not completely automated in that man-ual work is needed to translate the behavioural description from VHDL into the first-order logic used by Nqthm. The proof itself is based on a methodology supporting induction developed by Pierre for the verification of replicated structures. Provided certain design criteria are met, the Chapter 7. Examples 161 proof can be automatically done by the system. Using replication and induction a general proof can be done for an ra-bit multiplier rather than having to do individual proofs for individual bit-widths. Moreover, the approach is computationally efficient so duplicated work can be avoided. The disadvantage of this approach is that it relies on the VHDL programs being written in a certain way. This is probably not too critical since the restrictions are not unreasonable. More seriously, timing issues are not dealt with. This may be a problem since while the functional-ity is independent of the bit width, timing is not. As timing is an important part of low-level verification, this approach needs further development. Equivalence methods have also been used to verify multipliers. Van Eijk and Janssen use a BDD-based tool to show equivalence between different implementations of multipliers [30]. Their method relies on (automatically) finding structural and functional equivalences between different implementations of the circuit. For some circuits they get excellent experimental re-sults. However, they too do not consider timing. Typically, one of the circuits is derived from the other through a number of design steps; thus, the confidence in the verification depends on the confidence on the correctness of the original circuit. Although the compositional method proposed in this thesis relies on some structure of the circuit being identified, it is not necessary to decompose the circuit, or that clearly defined gross structure be determined. To be useful, it is only important to be able to identify circuit nodes with 'interesting values'; this makes it relatively robust to circuit optimisation. An advantage of the compositional theory is that it incorporates a good model of time, which may be important in many applications. This advantage outweighs the disadvantage of having to verify circuit designs for each bit-width, which theorem proving approaches may obviate. As discussed in Section 2.3.3, Kurshan and Lamport also explored combining theorem prov-ing and model checking, and have applied their technique to verifying multipliers [93]. The work was not fully mechanised, and the implementation of the multiplier given at a high level. Chapter 7. Examples 162 However, although exploratory, their work suggested that combining different approaches would be successful. 7.4 Matrix Multiplier A filter circuit based on a design of Mead and Conway is Benchmark 22 of the IFIP WG10.5 suite [100]. The filter is a matrix multiplication circuit for band matrices. A band matrix of band width w is a matrix in which zeros must be in certain positions (the matrices contain natu-ral numbers), and the maximum number of non-zero items in a row or column is w. This circuit is called 2Syst. Section 7.4.1 discusses the specification of the circuit; Section 7.4.2 discusses its implementation; Section 7.4.3 presents its verification; and Section 7.4.4 analyses and com-ments on the verification in which a significant timing error was discovered. 
Sections 7.4.1 and 7.4.2 rely heavily on the benchmark documentation.2 7.4.1 Specification The suite documentation does not give a general specification of the circuit (it does give a gen-eral implementation), but presents the case of w = 4. A circuit implemented for a band-width of w can be used to multiply matrices of any size — larger matrices just take longer to multiply; the documentation does not consider the general case, and gives only a specification for 4 x 4 matrices. Let A and B be the two 4 x 4 matrices given below: 2The URL for the documentation is f tp: //goethe. i r a .uka. de/pub/benchmarks/2Syst/. This section is based on the documentation of this benchmark dated 16 November 1994. As a result of this research, the documentation has been revised, and the new version will be released shortly. Chapter 7. Examples 163 « 1 1 « 1 2 0 0 hi bu &13 0 « 2 1 « 2 2 « 2 3 0 B = b2i b22 ^23 &24 ^31 Ct32 « 3 3 « 3 4 0 b32 ^33 ^34 0 « 4 2 « 4 3 a 4 4 0 0 C>43 6 4 4 and let C — A x B be the matrix: C Cn Cl2 C i 3 C14 C21 C 2 2 C 2 3 C 2 4 C 3 1 C32 C 3 3 C 3 4 C 4 i C 4 2 C 4 3 C 4 4 The external interface of the 2Syst circuit is shown in Figure 7.6. The coefficients of matrix A are input on the inputs aO, . . . , a3, the coefficients of B are input on bO, . . . , b3, and the coefficients of C, the result, is output on outputs cO to c6. (What this picture, taken from the documentation, does not show is that the circuit is clocked and there should be a pin for clock input too.) Figure 7.6: Black Box View of 2Syst Chapter 7. Examples 164 Timing The timing of when and where the inputs must be applied and the outputs become available is critical. The timing for the inputs is presented in Table 7.6. In clock cycles 0 to 3, all the inputs are initialised by having zero applied to them. Then, for the next ten cycles the matrix coefficients are input to the circuit. For example, in cycle 9, the coefficients a 2 3 , 042, b32 and 642 are input on pins aO, a3, bO, and b3 respectively, while all other pins have zero applied to them. clock aO al a2 a3 bO bl b2 b3 0 - 3 0 0 0 0 0 0 0 0 4 0 flu 0 0 0 611 0 0 5 0 0 0 0 0 612 0 6 «12 0 0 « 3 1 621 0 0 613 7 0 «22 0 0 0 622 0 0 8 0 0 0-32 0 0 0 623 0 9 a 2 3 0 0 « 4 2 632 0 0 b24 10 0 « 3 3 0 0 0 633 0 0 11 0 0 «43 0 0 0 634 0 12 0.34 0 0 0 643 0 0 0 13 0 044 0 0 0 644 0 0 Table 7.6: Inputs for the 2Syst Circuit Table 7.7 shows when and where the coefficients of the output matrix can be found. The specification gives some freedom in timing here. It requires that the output be given in clock cycles t0,. . . , h, but does not specify values for the tj \ and, while t0 < ti... < t6, the tj need not be consecutive clock cycles. This gives some latitude in the implementation of the circuit. 7.4.2 Implementation The matrix multiplication C = AxB can be defined in different ways. Assuming for simplicity that A and B are both r x r matrices, the usual definition of C is through defining each C{3- = Chapter 7. Examples 165 cycle cO Cl c2 c3 c4 c5 c6 to C l l U Cl2 c 2 i h Cl3 C22 C31 C i 4 C23 C32 c 4 i u C33 C42 h C34 C43 te C44 Table 7.7: Outputs of the 2Syst Circuit Zll=i aikbki- An alternative definition is useful in implementing parallel hardware to perform the multiplication: matrix multiplication can also be defined by the recursive equation 7.14. c{1) = 0 _ (H-i) The entries in arrays A and B are n-bit numbers. 
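Spelled out, Equation 7.14 builds each c_ij by accumulating one product per stage. The small Python check below compares this accumulation with the direct definition; the indexing follows the standard formulation, and the r = 4, random-entry matrices are illustrative assumptions (the band structure is ignored here, since it only determines which products are zero).

    import random
    r = 4
    A = [[random.randrange(16) for _ in range(r)] for _ in range(r)]
    B = [[random.randrange(16) for _ in range(r)] for _ in range(r)]

    def c_direct(i, j):                   # c_ij = sum over k of a_ik * b_kj
        return sum(A[i][k] * B[k][j] for k in range(r))

    def c_recursive(i, j):
        c = 0                             # c_ij^(1) = 0
        for k in range(r):
            c = c + A[i][k] * B[k][j]     # c_ij^(k+1) = c_ij^(k) + a_ik * b_kj
        return c                          # c_ij = c_ij^(r+1)

    assert all(c_direct(i, j) == c_recursive(i, j)
               for i in range(r) for j in range(r))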
If the band-width of the matrices is w, the maximum number of non-zero terms in any c , j is w, which means that each entry in c , j is of bit-width m = 2n + r — 1. The basic operation of Equation 7.14 is performing an addition and a multiplication; this is modelled in the implementation, where the basic cell has an integer multiplier and adder to perform this. The external interface of these cells is shown in Figure 7.7. The cell has three inputs: C_In is an m bit number, containing a partial sum; and A_In and B_In are n bit data which are either zero or coefficients of the A and B matrices. A_Out, B_Out are two n-bit output values and C_Out is an m-bit output. If in one clock cycle A_In, B_In and C_In have the values a, b and c respectively, then at the start of the next cycle: A_0ut = a, B_0ut = b, C_0ut = ab + c. Thus, the cell has two purposes: it acts as a one clock-cycle delay buffer for coefficients of the matrices (which are passed on to neighbouring cells), and performs the basic operation of Chapter 7. Examples 166 C _ O u t ( m . . l - 0) A _ I n ( n . . l - 0) B _ I n ( n . . l - 0) \ B _ O u t ( n . . l - 0) A _ O u t ( n . . l - 0) C _ I n ( m . . l - 0) Figure 7.7: Cell Representation an addition and multiplication. Figure 7.8 shows how the cells are implemented. Each cell contains a multiplier, an adder, and three registers. The multiplier is the one discussed and verified in the previous section, and the adder is a conventional 2n-bit adder. Each register has an input, an output, and a clock and select pin. By connecting the select and clock pins to the same global clock, the registers become positive-edge triggered: when the clock rises the value at the register's input is latched, output, and maintained until the clock rises again. These cells are connected in a systolic array: each clock cycle cells performs an addition and multiplication and then passes its results to its neighbours for use in the next cycle. The cells are arranged as presented in Figure 7.9, and the timings given in Table 7.6 are designed so that cells get the right inputs at the right time. A simple example will illustrate how this works. To help the description, each cell in the systolic array has been labelled by i: j. The circuit is implemented in Voss's EXE format as a detailed gate level description, using a unit delay model. The implementation is based on the VHDL program given in the benchmark suite documentation. Chapter 7. Examples 167 A_In-B_In-Mult C_In- -h m Adderj-n A_Out Reg Reg n B_Out m . C_Out clock Figure 7.8: Implementation of Cell Example 7.1. Consider the computation of c 2 i = a2i&n + a22b2i- In the first three clock cycles the circuit is initialised so that at the start of the fourth cycle, all inputs have value zero. Cycle 4: &n is input on b l (input B_In of Cell 1:0). (a u is also input in this cycle, but in the example, we only consider values contributing to c 2i). Cycle 5: Cell 1:0 will have passed bn to its neighbour, so that 6n now becomes an input for Cell 1:1. a 2 1 is input on a.2 (the A_In input of Cell 0:2). Cycle 6: Cell 1:1 will have passed bn to the B_In input of Cell 1:2, and Cell 0:2 will have passed an to the A_In input of Cell 1:2. At this stage, the C_In input of Cell 1:2 has the value 0. Cell 1:2 therefore computes a n b n . At the same time, b2\ appears as input on bO, which is input B_In of Cell 0:0. Chapter 7. Examples 168 Cycle 7: Cell 1:2 will have passed anbn to Cell 0:1 as its C_In input. 
Cell 0:0 will have passed on £>2i to Cell 0:1 as its B_In input. a 22 appears on a l , which is the A_In input of Cell 0:1. Cell 0:1 computes anbn + 022^21 • Cycle 8: Cell 0:1 outputs anbn + a2ib2\ on its C-Out port (which is c4). c3 aQ , 1 , JaO 0:0 1:1 c2 J — .bi 1:0 2:2 q l 2 :1 3:3 2 : 0 3 : 1 Jo2 cO 1 Figure 7.9: Systolic Array 7.4.3 Verification The verification task can be divided into two parts, the verification of the individual components, and using the verification of the components to show that whole array is correct. Chapter 7. Examples 169 Verifying the Cells The verification of a cell must show the multiplier, adder and registers all work correctly. Each cell must be verified individually. This section describes the verification of Cell u:v, and as-sumes for the sake of this exposition that the clock cycle is 200ns, and the bit-width is 4. In the discussion below, the A_In„„ and B_Inu„ are four-bit nodes, while all variables are 12 bit values. To simplify notation, in all the discussion below, a and b are short hands for a(3..0) and6(3..0) respectively. It turns out that it useful to divide this proof into three parts: • Given value a on A_Inu„, b on B_In w , and c on C_In u l ), one clock cycle later a * b + c appears on C_0utu„; • Given value a on A_Inu„, one cycle later a appears on A_0utulI; and • Given value b on B_Inu„, one cycle later b appears on B_0utu„. When the cells are connected together, port CJlnuv is connected to C_0ut( u + 1)(„ + 1), port A_0utu„ is connected to A_Inu(„+i), and B_0utut, is connected to B_In( u +i)„. Therefore, the above veri-fication conditions are rewritten as: • Given value a on A_In„„, b on B_In„„, and c on C.Out^+ij^+i), one clock cycle later a * 6 + c appears on C_0utu„; • Given value a on A_In u„, one cycle later a appears on A_Inu(^+1); and • Given value b on B_In„^, one cycle later b appears on B_In(„+i)„. Of course, it is possible to combine all three into one, stronger result. However, having three weaker results makes the proof more flexible since at some stages the proof needs only the weaker result, and using a stronger result would clutter things up and be more inefficient. The costliest part of the proof is to show the multiplier works correctly. As Section 7.3.3 showed how the Benchmark 17 multiplier can be verified, for the purpose of this section, Re-sult 7.15 is assumed (in the actual verification, the multiplier for each cell is reverified). Chapter 7. Examples 170 |= By various rules d Global [(0,100)] ([k-Inuv] = a A [B.Inuv] = b) (7.15) Global [(22,100)] ([Ouv] = a*b)) In the cell, the clock has an important effect; to include information of when clocking happens, the rule of consequence is often used to strengthen the antecedent of a result. For convenience, let Clockk = Global [(200&, 200Jc + 99), (200(£: + 1), 200(£: + 1) + 99)] ([clock] =f) A Global [(200A; + 100,200/c + 199)] ([clock] = t) which is the information about clocking which is needed in the proof of the A;-th cycle. This formula says that the clock is low from time 200fc to time 200/?+99, then high from time 200A;+ 100 to 200A; + 199, and then low again from time 200& + 200 to 200A; + 299. Using this idea, Result 7.15 is transformed strengthening the antecedent, as well as taking into account the input on C_In. Although, this is not useful for its own sake, it is useful in using the essence of Result 7.15. 
|= By Theorem 5.7 d Global [(0,100)] [A_In t t„] = a A [B_In„„] = 6 A [C_0ut („ + 1 ) („ + 1 )] = c A Clocks (7.16) => Global [(22,100)] ([•„„] = a * 6) } In the next step we show that the adder works correctly and that the output of the adder is latched for the appropriate time. This can be done with one trajectory evaluation. Note that the time Chapter 7. Examples 171 interval in the consequent could be made bigger, but the one given suffices. |= By STE \ Global [(22,100)] ([0U„] = d(7..0> A [ C _ 0 u t ( u + 1 ) ( „ + i ) ] = c A Clock0) (7-17) = 0 Global [(200, 300)] ([C_0utu„] = c + d(7..0)) ) Results 7.16 and 7.17 are now combined by specialising the latter result (substituting ab for d), and using transitivity. Note that this is just a special case of General Transitivity (Theorem 5.31). |= By Theorem 5.31 <] Global [(0,100)] ([A_Inu„] = a A [B_Inuv] = 6 A [ C _ 0 u t ( u + 1 ) ( „ + 1 ) ] = c A Clock0) (7.18) = > Global [(200,300)] ([C_0utu„] = c + a * b) ) Result 7.18 is the core result that has to be proved about the cell. The next two results show that the cell also acts as one cycle delay buffers for values of the A and B matrices. Both of these results can easily be done using STE alone. |= By STE \ Global [(0,100)] ( [A_In u „]=a A Clock0) (7.19) = 0 Global [(200,300)] ([A_In u ( t ; + 1 )] = a) ) \= By STE \ Global [(0,100)] ([B_Inw] = b A Clock0) (7-20) = > Global [(200, 300)] ( [B_In ( u + 1 ) „] = b) \ Overall Verification Once each of the cells has been individually verified, the proofs about the individual cells must be combined to prove that the systolic array as a whole works correctly. The proof is modelled on how the systolic array computes its results; in its development the Chapter 7. Examples 172 proof traces the behaviour of the circuit as it uses its inputs, computes results, and outputs the answers. Consider the operation of one cell, Cell u:v. It has three input neighbours from which it gets values (the boundary cells are special cases and easily taken care of): • Cell u:(y — 1), its A-left-neighbour from which it gets a value of the A matrix, • Cell (u — l):v, its B-right-neighbour from which it gets a value of the B matrix, and • Cell (u + l):(v + 1) its C-down-neighbour from which it gets a partial sum; and three output neighbours to which it gives values: • Cell u:(v + 1), its A-right-neighbour, to which it gives a value of the A matrix, • Cell (u + l):v, its B-left-neighbour, to which it gives a value of the B matrix, and • Cell (u — l):(v — 1) its C-up-neighbour, to which it gives a partial sum; At the beginning of clock cycle k, none, some, or all of the following will be known about Cell u:v's input neighbours (recall that a clock cycle is 200 time units long), where the Ij are antecedent TL formulas, and the 6X are integer expressions: |= 4/ 1=t>Global [(200fc,200fc + 100)] [kJ.nuv] = 6a ) (7.21) |= \ I 2 ^ G l o b a l [(200A;,200fc + 100)] [B_In^] = ^ 6^ (7.22) |= \ Ix =t>Global [(200fc, 200A; + 100)] [C_0ut ( „ + 1 ) ( „ + 1 ) ] = 6C \ (7.23) If all three results are known, then we use conjunction on Results 7.21-7.23, and introduce new clocking information. For convenience, let I4 = Ii A I2 A I3 A Clockk-This is the conjunction of Iu I2 and I3 and contains necessary clocking information for the k-ih Chapter 7. Examples 173 cycle. Then we have: (= By Conjunction and Rule of Consequence (7.24) Global [(200A;, 200A; + 100)] [A_Inu„] = 6>a A [B_In^] = 6>6 A [C_0utu„] = 6C. 
) Then Result 7.18 is time-shifted forward by &-clock cycles to get: |= By time-shifting d Global [(200Jfe, 200A; + 100)] ([kJ.nuv] = a A [B_Inu„] = 6 A [C_0ut(u+1)(„+i)] = c A Clockk) =T> Global [(200(fc + l),200(fc + 1) + 100)] ([CJDutuv] = c+ a * b) |> Using General Transitivity on Results 7.24 and 7.25 leads to: |= By Theorem 5.31 i h (7-26) =D> Global [(200(A: + 1), 200(fc + 1) + 100)] ([C_0utu„] = 9C + 6a * db)) This is a proof of what Cell u:v computes in the A;-th cycle. In proving what happens in the (k + l)-th cycle, Result 7.26 is used in the proof of the behaviour of Cell (u + l ) : ( u +1), which is Cell u:u's up-C-neighbour. Similarly, if Result 7.21 is known, then precondition strengthening is used to introduce new clocking information to get: |= By Theorem 5.7 <| h A Clockk (7.27) => Global [(200fc, 200& + 100)] [A_Inu„] = 6a ) Then Result 7.19 is time-shifted by k clock cycles to get: Chapter 7. Examples 174 (= By STE <] Global [(200fc, 200k + 100)] ([A_In„„] = a) A C/ocfcfc Global [(200(fc + l),200(fc + 1) + 100)] ( [A_In u ( „ + 1 ) ] = a) ) General Transitivity between Results 7.27 and 7.28 then yields: (7.28) (= By Theorem 5.7 | / i A Clockk (7.29) Global [(200(fc + l),200(fc + 1) + 100)] ([A_Inu ( t,+ 1 )] = 9a) ) This shows what Cell u:v passes to its A-right neighbour at the end of the A;-th cycle, and this result will be used to prove properties of Cell u: (v + 1) in the (k + l)-th cycle. A similar result shows that in the fc-th Cell u:v also passes on the value input on its B_In port, (= By various rules FL Proof script The FL proof script that performs the proof uses the approach outlined above. First, the behaviour of each cell is individually verified. Then, the proof proceeds by proving properties of the circuit in each clock cycle. A two dimensional array of proofs is kept: at the start of the k-ih cycle, the array's (it, v) entry contains proofs of what the output of Cell u:v input neighbour's are at the end of the (k — l)-th cycle. The proof then uses this information to infer as much as possible about the output of Cell u:v at the end of the fc-th cycle, and this information is then used to update the array of proofs so that Cell u:v's output neighbours can use this information in the (k + l)-th cycle. ^ I2 A Clockk (7.30) Global [(200(fc + 1), 200(A; + 1) + 100] ( [B_In ( u + l H ] = dh) ) Chapter 7. Examples 175 Start ol cycle c_0 Cell 3:0 C _ l Cell 2:0 c_2 Cell 1:0 c_3 Cell 0:0 c_4 Cell 0:1 c_5 Cell 0:2 c_4 Cell 0:3 7 Cll 8 Cl2 c 2 i 9 Cl3 C31 10 C14 C22 c 4 i 11 C23 C32 12 C24 C42 13 C33 14 C34 C43 15 16 C44 Table 7.8: Benchmark 22: Actual Output Times 7.4.4 Analysis and Comments The FL proof script uses STE and the inference rules to prove what the output of the circuit is at different stages - this is summarised in Table 7.8. Comparison between Tables 7.7 and 7.8 shows that even given the ability for the designer to choose the values of t\,... ,te, the implementation does not meet the specification. There are two possibilities. The easier and probably better solution would be to change the specification, in accordance with the results shown in Table 7.8. However, another solution would be to place one cycle delay buffers on the outputs c_0, c_ l , c_5 and c_6; the amount of extra circuitry is small, would not slow down the circuit, and would lead to a more elegant specification. 
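The cycle-by-cycle bookkeeping performed by the FL proof script can be pictured with a small sketch: an array records what is known about each cell's inputs, and every step pushes the consequences to the three output neighbours, exactly as Results 7.24-7.30 do. The Python below is only a schematic analogue over concrete values; in the proof script the entries are TL assertions justified by time-shifting Results 7.18-7.20 and applying General Transitivity, and the injection of the externally scheduled coefficients at the array boundary is omitted here.

    W = 4                                       # a W x W array of cells

    def step(facts):
        # facts[u][v] = [a, b, c]: what is known about Cell u:v's inputs this cycle
        nxt = [[[0, 0, 0] for _ in range(W)] for _ in range(W)]
        for u in range(W):
            for v in range(W):
                a, b, c = facts[u][v]
                if v + 1 < W:
                    nxt[u][v + 1][0] = a                 # A value to the A-right neighbour
                if u + 1 < W:
                    nxt[u + 1][v][1] = b                 # B value to the B-left neighbour
                if u >= 1 and v >= 1:
                    nxt[u - 1][v - 1][2] = a * b + c     # partial sum to the C-up neighbour
        return nxt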
The proof script, including the proof of the correctness of all the multipliers and declara-tions, is approximately 500 lines long, of which about 100 lines are declarations. The proof script can be found in Section C.5. The program itself is straightforward, although the use of a two dimensional array does not show off a functional, interpreted language at its best. The com-plete verification of a 4 x 4 systolic array of 32 bit multipliers (roughly 110 000 gates) takes just over 10 hours of CPU time on a DEC Alpha 3000 using the testing machine approach, and Chapter 7. Examples 176 just under three hours using the direct method. This verification uses the testing machine algorithm for STE, showing the weakness of us-ing testing machines. The data structure needed to represent the model of the circuit is approx-imately 4M in size, making composition of circuit and testing machines difficult. While other implementations of machine composition are possible, the sheer size of the circuits remains an inherent problem. A similar problem can be seen in the verification of the multiplier (Table 7.5). Since both the size of the circuitry and the number of trajectory evaluations is quadratic in the bit-width, if every time trajectory evaluation must be done, circuit composition must be too, the resulting algorithm will be at least quartic. This explains why the verification of large bit widths becomes so expensive for testing machines. The second part of the verification — showing that when connected together the multipliers produce the correct answer — is essentially performing symbolic simulation. Zhu and Seger have shown that given a set of trajectory assertion results, there is a weakest machine which satisfies these assertions [130]; this weakest machine is a conservative approximation of the circuit as any assertion that is true of the approximation is also true of the circuit. This suggests an alternative verification methodology. The verification of the correctness of each of the multipliers extracts the essence of the behaviour of the circuit. From these assertions it should be possible to automatically generate a conservative approximation of the entire sys-tolic array. This representation of this approximation would not use BDDs; in fact it would be at a higher level of abstraction. STE could then be used on the verification of the entire systolic array. Chapter 7. Examples 111 7.5 Single Pulser This example shows how the fundamental compositional theory introduced in Chapter 5 can be built on; particularly through the use of induction on time, composite, problem-specific infer-ence rules can be developed. 7.5.1 The Problem Johnson has used the Single Pulser — a textbook example circuit — to study different verifica-tion methods [88]. The original problem statement for the circuit is: We have a debounced pushbutton, on (true) in the down position, off (false) in the up position. Devise a circuit to sense the depression of the button and assert an output signal for one clock pulse. The system should not allow additional assertions of the output until after the output has released the button. Johnson reformulates this into: • the pulser emits a single unit-time pulse on its output for each pulse received on i, • there is exactly one output pulse for every input pulse, and • the output pulse is in the neighbourhood of the input. Figure 7.10 illustrates the external interface of the pulser. 
The port In is the button to be pressed (if it has the value H, the button is pressed, if L then it is not), and Out is the output. In Out Figure 7.10: Single Pulser Chapter 7. Examples 178 Johnson presents the verification of this circuit in a number of different systems. This sec-tion attempts the verification using the compositional theory of STE. This attempt is not as gen-eral as some of Johnson's approaches since the specification is very specific about the timing of the output with relation to the button being pressed. 7.5.2 An Example Composite Compositional Rule The motivation for the lemma below is that the essence of the behaviour of the pulser can be described by three assertions that show how the pulser reacts immediately to stimulation. By using induction over time, these results can be combined and generalised. Lemma 7.1. Let s,t, and u be arbitrary integers such that 0 < s < t < u. Suppose: 1. |= { J - . c / i ^ N e x t / i ! 2. |= (] (-ic/i A Next 0 i )=>(Next 2 / i 2 ) |>, and 3. (= <]0i=D>Next 2/ii [>; then 1. h 4 Global [(s,t)] (-.0i)=t>Global[(s + l,t + 1)] hi }. 2. f= ^ (Global [(s,t)] (-i0i) A Global [(t + 1 , « ) ] gx) =o (Global [(s + l,t + 1)] hi) A (Nex t ( i + 2 ) / i 2 ) A (Global [(t + 3, u + 2)] / i 2 ) } Prao/ The proof of 1 comes straight from Corollary 5.23. For 2, let s, t, and u be arbitrary natural numbers such that s < t < u. Chapter 7. Examples 179 (1) |= (| G l o b a l [(s, t)] (^gi)=t> G l o b a l [(s + l,t + 1)] hi } F r o m hypothesis (1) by L e m m a 5.22 (2) (= ^Next*(-«5fi) A Nex t (* + 1 ^i=> N e x t ( i + 2 ' / i 2 } Time-shi f t ing hypothesis (2) (3) |= <] G l o b a l [t+1, u)] ^ = 0 G l o b a l [(i + 3, u + 2)] /ix [> F r o m hypothesis 3, by L e m m a 5.22 (4) |= { ( G l o b a l [(s, t)] (-.gi) A G l o b a l [(t + 1, u)] flr,) = > ( G l o b a l [(s + 1, i + 1)] A N e x t ( < + 2 ' / i 2 A G l o b a l [(t + 3,u + 2)] /»2)} Conjunc t ion o f (1), (2), (3) 7.5.3 Application to Single Pulser G i v e n a candidate c i rcui t , it should be possible to use S T E to ver i fy the f o l l o w i n g three prop-erties: 1. |= i ( i [ l n ] ) = > N e x t (-"[Out]) ); 2. \= <| (-.[In] A N e x t [ln])==T> (Nex t 2 [0u t ] ) ) , and 3. |= { [ l n ] = O N e x t 2 ( - i [ 0 u t ] ) |). U s i n g these results, the above l e m m a can be i n v o k e d to show that 1. |= ^ G l o b a l [(s,t)] ( - i [ l n ] ) = C > G l o b a l [ ( s + l,t + 1)] --[Out] [>, and 2. |= 4 ( G l o b a l [(s, t)] (--[In]) A G l o b a l [(t+1, u)] [ in]) =D> ( G l o b a l [(s + l,t + 1)] (-"[Out]) A N e x t ( i + 2 ) [ O u t ] A G l o b a l [ ( < + 3,u + 2)] (-.[Out]))} Chapter 7. Examples 180 The first result says that if the input does not go high (the button is not pushed), then the output does not go high. The second result says when the button is pushed (input goes from low to high), the output goes high for exactly one pulse and then goes low and stays low at least as long as the button is still pushed. I argue that these two properties capture the intuitive specification of Johnson. However, the specification is more restrictive; there are valid implementations that satisfy Johnson's specifi-cation which would not pass this specification, showing the limitations of our current methods. It is possible to give a more general specification based on Johnson's SMV specification3, but currently there are not efficient model checking algorithms for these specification. 
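A reference model consistent with the three STE-checked properties, and hence with the two derived results, is a rising-edge detector with a fixed two-cycle latency; that latency is read off the properties above and is an assumption of this sketch, not part of Johnson's more abstract statement. The Python below checks the three properties exhaustively over all input traces of length eight.

    from itertools import product

    def pulser(ins):
        # Out at time t is high exactly when In rose between times t-2 and t-1
        return [1 if t >= 2 and ins[t - 1] == 1 and ins[t - 2] == 0 else 0
                for t in range(len(ins))]

    for ins in product((0, 1), repeat=8):
        outs = pulser(ins)
        for t in range(len(ins) - 2):
            if ins[t] == 0:
                assert outs[t + 1] == 0                  # property 1
            if ins[t] == 0 and ins[t + 1] == 1:
                assert outs[t + 2] == 1                  # property 2
            if ins[t] == 1:
                assert outs[t + 2] == 0                  # property 3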
7.6 Evaluation

The experiments reported in this chapter showed that the compositional theory can be successfully implemented in a combined theorem proving and trajectory evaluation system, thereby enabling circuits with extremely large state spaces to be fully verified with reasonable human and computational costs. The following table summarises the examples verified (in the size column, n refers to the bit-width).

Description of circuit              How verified                  Approx. size (gates)
Simple comparator                   STE/Compositional Theory      O(n²)
Hidden weighted bit                 STE/Compositional Theory      O(n²)
Carry-save adder                    STE                           200
B8ZS encoder                        STE                           75
IEEE floating point multiplier      STE/Compositional Theory      33 000
Simple 64-bit multiplier            STE/Compositional Theory      25 000
Benchmark 17 multiplier             STE/Compositional Theory      28 000
Benchmark 22 systolic array         STE/Compositional Theory      115 000

³ Note that although the timing constraints in the SMV specification are more general, this SMV specification is also implementation dependent — in particular, it requires some knowledge of the internal structure of the implementation, which this proof does not.

In using the verification system, a key issue is the user interface to the system. Both the STE and the other inference rules are provided in one common, integrated framework. This not only makes it easier for the human verifier to use, but reduces the chance of error. Providing STE as an inference rule for the theorem prover to use proved useful. The ability to use FL as a script language was extremely important for increasing flexibility and ease of use.

The method of data representation proved to be very successful. It allowed BDDs to be used where appropriate, and other representations where BDDs are inappropriate. Decision procedures and other domain knowledge are critical for the success of the approach.

The results presented show that the increased expressiveness of TL not only allows a richer set of properties to be expressed, but can make specification cleaner too.

This chapter also shows that all three extensions to STE are feasible and can be applied successfully. However, both the testing machines and the mapping method have significant drawbacks in different circumstances. Testing machines are not appropriate to use when the circuit being verified is very large, and when a number of trajectory evaluations will be run requiring different testing machines. Although the cost of automatically constructing testing machines is reasonable, the overhead of performing circuit composition repeatedly can be very large. On the other hand, once the new circuit is constructed, trajectory evaluation is efficient, and therefore the method may be appropriate where only a few trajectory evaluations will be done, and where the consequents are complicated.

The mapping method suffers from the need to introduce extra boolean variables. This is particularly the case when wishing to show that a state predicate holds for a sequence of states, where although the individual states are different, the relationship between state components stays constant. For example, we may wish to show that for a sequence of n states, at any time exactly one of m of the state's components has an H value. Using the mapping method would require the introduction of nm variables. A different example is the B8ZS verification, where we wish to show that too many zeros do not appear consecutively.
The testing machine and direct methods require no new boolean variables; the mapping method would require two new boolean variables for each time step.

Chapter 8

Conclusion

Verification of large circuits is feasible using the appropriate logical framework. Chapters 3, 4 and 5 presented such a framework. Chapters 6 and 7 showed how this theory can be successfully implemented and illustrated the method by verifying a number of circuits. A summary of the research findings is given in Section 8.1, and some issues for future research are given in Section 8.2.

8.1 Summary of Research Findings

8.1.1 Lattice-based Models and the Quaternary Logic Q

The motivation of model checking is to use a logic to describe properties of the model of the system under study, and to verify the behaviour of the model by checking whether the properties (written as logic formulas) are satisfied by the model. The key questions are: how the model is represented; which logic is used; and how satisfaction is checked.

Using a lattice model structure has significant advantages for automatic model checking. By using a partial order to represent an information ordering, much larger state spaces can be modelled directly than with more traditional representation schemes. The work described earlier showed the advantage of this method of model representation.

This information ordering has a direct effect on what can be known about the model. A two-valued propositional logic is too crude a tool to use — it must conflate lack of knowledge with falseness. This is not only wrong in principle; the technical properties of a two-valued logic make it impossible to support negation fully. The quaternary logic Q is suitable for describing the state of lattice-based models since it can describe systems with incomplete or inconsistent information. This makes it possible to distinguish clearly between truth and inconsistency, and between falseness and incomplete information. Moreover, it supports a much richer temporal logic.

On the whole, the use of Q has been very successful. However, there are some minor points which need some attention. As discussed in Chapter 3, the definition of Q given in Table 3.1 on page 49 is not the only one possible. For example, in the definition given here, ⊥ ∨ ⊤ = t. This definition is not without its problems — although it does have the advantage of very efficient implementation, it complicates some of the proofs and, notwithstanding the usual intuitive motivation, seems difficult to justify in the context of a temporal logic. ⊥ ∨ ⊤ = ⊤ would seem to be a better definition. In order to keep monotonicity constraints this would necessitate defining t ∨ ⊤ = ⊤ too. These redefinitions would mean that disjunction in Q would not be the meet with respect to the truth ordering of Q. Which would be the better definition is not clear; more theoretical and practical work must be done.

8.1.2 The Temporal Logic TL

Q can only describe the instantaneous state of a model. The temporal logic TL uses Q as its base, and can describe the evolving behaviour of the model over time. Note that the choice of Q as the base of the temporal logic leaves much freedom in choosing the temporal operators of a temporal logic, and other temporal logics could be built on top of Q. Previous temporal logics proposed for model checking partially ordered state spaces could not be as expressive as TL because they were based on a two-valued logic. In particular, TL supports negation and disjunction fully.
In the examples explored in this thesis, the expressiveness of the logic was quite sufficient (the problems encountered with some of the verifications were caused by shortcomings of the model checking algorithms). Nevertheless, whether introducing new temporal operators would be worthwhile is an interesting question, especially if the model structure were extended (see Section 8.2.1).

8.1.3 Symbolic Trajectory Evaluation

STE has been used successfully in the past for model checking partially ordered state spaces. However, previous work only supported a restricted temporal logic. The thesis showed that the theory of STE could be generalised to deal with the whole of TL, and a number of practical algorithms were proposed for model checking a significant subset of TL. In particular, the four-valued logic of Q proved a good technical framework for STE-based algorithms.

8.1.4 Compositional Theory

The increase in expressiveness makes the need to overcome the performance bottlenecks of model checking more alluring and more important computationally. One of the primary contributions of the research is the development of a sound compositional theory for STE-based model checking using TL formulas. A set of sound inference rules can be used to deduce results: the base rule uses STE to verify a property of a model; the other rules can be used to combine properties previously proved (a schematic sketch of this rule structure is given at the end of this section).

At a practical level, the compositional theory can be used to implement a hybrid verification system that uses both theorem proving and model checking for verification. BDD-based model checking algorithms are extremely effective in proving many properties. However, there are inherent computational limits in what these methods can do; by using a theorem prover which implements the compositional theory, these limits can be overcome to a great extent. By providing automatic assistance, increasing the level of abstraction, and, most importantly, by providing a powerful and flexible user interface to the theorem prover (through FL), the task of the human verifier using the theorem prover can be made easier. Features of this approach are:

• An appropriate verification methodology can be applied at the appropriate level — model checking at the low level, theorem proving at a higher level.
• STE supports a good model of time. This makes it suitable to verify not only functional correctness, but many timing properties.
• In the verification, although the implementation is given at a low level (e.g. at the gate or switch level), the correctness specification (viz. the TL formulas used) is, through the use of data abstraction, at a fairly high level.
• User intervention is necessary. Low-level verification through STE, and heuristics in the theorem proving component, are important in alleviating the burden the verifier might otherwise encounter.

To illustrate the effectiveness of the approach, a number of circuits were completely verified. The largest of these circuits is one of the circuits in the IFIP WG10.5 Benchmark suite and contains over 100 000 gates. A serious timing error was discovered in the verification. This experimental work showed that increasing the expressiveness of the temporal logic that STE supports not only means that more properties can be expressed, but also that, through the use of the compositional theory, verification remains computationally feasible.
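As a schematic illustration of that rule structure, the sketch below represents verified results as antecedent/consequent pairs and layers two of the property-composition rules of Chapter 5 (conjunction and time-shift) over a base STE rule. The tuple encoding of formulas, the `ste_oracle` callable and the toy example at the end are assumptions made for this sketch only; they are not the FL interface of the actual prototypes.

```python
from dataclasses import dataclass

# Formulas are nested tuples standing in for TL syntax, e.g. ("and", f, g),
# ("next", f), ("not", f), or an atomic node name such as "In".

@dataclass(frozen=True)
class Assertion:
    """A verified result with an antecedent and a consequent (an STE assertion)."""
    ant: object
    con: object

def next_n(f, t):
    """Apply the Next operator t times to a formula."""
    for _ in range(t):
        f = ("next", f)
    return f

def ste_verify(ste_oracle, model, ant, con):
    """Base rule: discharge an assertion by running trajectory evaluation.
    ste_oracle stands in for the STE engine of the verification system."""
    if not ste_oracle(model, ant, con):
        raise ValueError("STE could not establish the assertion")
    return Assertion(ant, con)

def conjunction(a, b):
    """From g1 ==> h1 and g2 ==> h2 infer (g1 and g2) ==> (h1 and h2)  (Theorem 5.16)."""
    return Assertion(("and", a.ant, b.ant), ("and", a.con, b.con))

def time_shift(a, t):
    """From g ==> h infer Next^t g ==> Next^t h  (Theorem 5.15)."""
    return Assertion(next_n(a.ant, t), next_n(a.con, t))

# Rules such as consequence and transitivity (Theorems 5.18 and 5.19) also carry a
# semantic side condition on defining sequence sets; in the prototypes that check is
# discharged with BDDs and FL, which this sketch does not attempt to model.

# Example: combine two STE results and retime the combination by three steps.
always_true = lambda model, ant, con: True        # placeholder oracle, for illustration only
a1 = ste_verify(always_true, "circuit", "In", ("next", "Out"))
a2 = ste_verify(always_true, "circuit", ("not", "In"), ("next", ("not", "Out")))
print(time_shift(conjunction(a1, a2), 3))
```

The point is only the shape of the layering: the model checker appears solely inside the base rule, and everything above it is ordinary symbol manipulation together with side-condition checks.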
8.2 Future Research

The research has raised a number of research issues, and left some questions only partially answered.

8.2.1 Non-determinism

The lattice structure of the state space means that although the next state function is deterministic, non-determinism can be implicitly represented through the use of X values. Although suitable for dealing with non-deterministic behaviour of inputs of circuit models, this treatment of non-determinism is not very sophisticated. One avenue of research would be to investigate the possibility of incorporating non-determinism explicitly within the model structure by replacing the next state function Y with a next state relation. Whether the semantics would be linear or branching time needs exploration, although I conjecture that a branching time semantics would be more suitable. Trees, rather than sequences or trajectories, would be used to model behaviour (and properties verified using symbolic trajectree evaluation). This would clearly raise the issue of the expressiveness of TL, and the need for operators that express path switching.

8.2.2 Completeness and Model Synthesis

The work of Zhu and Seger [130] showed that the compositionality theory for trajectory formulas [78], with minor modification, is complete in the following sense. If K is a set of assertions, there is a weakest model M such that each assertion in K holds of the model. Moreover, any assertion that is true of M can be derived from K using the compositional theory. Whether the same thing is true of the compositional theory for TL needs further investigation.

This question is important from a practical point of view. Being able to construct such a weakest model from a set of assertions can be very useful for specification validation. It can also be used for verification, as discussed in Section 7.4.4, where a possible verification strategy for the systolic array multiplier was outlined. After proving that the individual base modules of the circuit work correctly, it should be possible to construct a model (the extracted model) of the circuit from the set of assertions proved of the base modules; these assertions extract out the essential behaviour of the circuit. Then, the overall behaviour of the circuit can be verified by performing STE on the extracted model.

This raises the question of how to execute temporal logics efficiently, which involves interesting theoretical and practical questions (see [57] for an introduction). The key to making this efficient is, I conjecture, that the appropriate data structures should be used for representing the extracted model. In particular, given that BDDs are a very good representation of bit-level descriptions of the circuit, it is unlikely that using a BDD representation for the extracted model will gain significant improvement in performance, and for a multiplier circuit it will certainly fail. Rather, the extracted model should be used as a method for finding a higher-level description of the circuit. For example, in the case of the array multiplier, an integer-level description would be suitable. Even using a non-canonical representation of integers would allow STE to be accomplished in this particular case. What is important is that it should be easy to apply domain information to the problem.
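As a toy rendering of the extracted-model idea, the sketch below treats a module as nothing but the set of assertions proved of it and computes, for a given partial state, the values those assertions force; anything not forced remains unknown. The dictionary state representation, the X/T encoding and the matching rule are simplifications invented for this example; they only gesture at the weakest-machine construction of Zhu and Seger [130] and do not reproduce it.

```python
X, TOP = "X", "T"     # no information / inconsistent, echoing the lattice values of the thesis

def holds(antecedent, state):
    """A partial assignment holds if the state agrees on every node it constrains."""
    return all(state.get(node) == val for node, val in antecedent.items())

def join(a, b):
    """Information-ordering join of two node values."""
    if a == X:
        return b
    if b == X or a == b:
        return a
    return TOP

def apply_assertions(state, assertions):
    """Values forced by the assertions whose antecedents hold in the given state;
    anything not constrained stays X (absent from the result)."""
    forced = {}
    for antecedent, consequent in assertions:
        if holds(antecedent, state):
            for node, val in consequent.items():
                forced[node] = join(forced.get(node, X), val)
    return forced

# Toy module described only by the assertions proved of it (an AND gate):
assertions = [
    ({"a": 1, "b": 1}, {"out": 1}),
    ({"a": 0},         {"out": 0}),
    ({"b": 0},         {"out": 0}),
]
print(apply_assertions({"a": 1, "b": 1}, assertions))   # {'out': 1}
print(apply_assertions({"a": 1},         assertions))   # {} : out is unconstrained (X)
```

Because the approximation only ever reports what the assertions imply, any property established of it also holds of the circuit, which is the sense in which the extracted model is conservative.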
Note that from a practical point of view, it may not be necessary for the compositional theory to be complete, provided that all, or most, interesting properties can be derived. If the compositional theory is not complete, then the usefulness of this approach must be determined experimentally.

8.2.3 Improving STE Algorithms

Although the STE-based algorithms presented here were shown to be effective, they are not capable of model checking all assertions. There are two major aspects that need research.

• Enriching the antecedent. So far the STE-based algorithms require that the antecedents be trajectory formulas. Although the use of the compositional theory ameliorates this restriction, it would be desirable to support richer antecedents. The key question is how sets of sequences can efficiently be represented. Through the introduction of fresh boolean variables it is possible to represent the union of two sets, thereby increasing the types of formulas for which relatively simple representations exist for their defining sequence sets. How efficiently can this be implemented? Are there alternative representation techniques?

• Supporting the infinite temporal operators. At present there are no general algorithms for supporting the until operator and the derived infinite temporal operators. To do this requires not only an efficient way to represent a set of states, but also efficient methods of performing operations such as set union and comparison. STE uses a parametric representation of state, which allows extremely large state spaces to be represented. This representation does not yet support efficient set manipulation operations. Thus, an important research question is how these operations can be implemented efficiently.

8.2.4 Other Model Checking Algorithms

This leads on to the question of whether model checking algorithms other than those based on symbolic trajectory evaluation would be effective. It appears that adapting the traditional BDD-style model checking algorithms, such as those described in [26], to deal with partially ordered state spaces would be possible. The logical framework developed here — Q, TL and the various satisfaction relations — would form the basis of such adaptation. The research question is how these model checking algorithms could be adapted to make use of partial information in an effective way. Particularly if extended to deal with non-determinism, an advantage of these model checking algorithms is that they would support model checking of properties requiring more expressive formulas than the style of verification supported by STE allows.

8.2.5 Tool Development

The prototypes developed in the course of this research have shown that efficient, usable tools can be developed to support the compositional theory. The key components are supporting powerful, easy use of domain knowledge, and the provision of a flexible user interface through FL. Although the prototypes were successful, they were prototypes and contained a number of ad hoc features. Not only is a cleaner implementation required, but there are some issues which need further attention.

• Forward or backward style of proof. The prototypes used the forward style of proof, whereas Seger's VossProver used the backward style of proof. While I believe that the forward style of proof is more appropriate for hardware verifications using this approach, the issue is not clear.

• Incorporating new domain knowledge.
The use of decision procedures and the incorpo-ration of domain knowledge in other ways (e.g. through decision procedures) is impor-tant. Standard packages for types such as bit vectors and integers must be provided, and it would be desirable to have a clean way for users to integrate new theories or extend old ones. • Partial automation of theorem proving. Although using STE for much of the verification alleviates much of the tedium traditionally associated with low-level of verification using theorem provers, it is desirable to automate as much as possible. The use of heuristics for finding time-shifts and specialisations needs to be extended. • Debugging facilities. When errors are detected it is important that meaningful error mes-sages be provided. One issue is relating higher-level concepts (e.g. an equation involving integers) to lower-level concepts (e.g. values on bit-valued nodes). Another issue is intel-ligent intervention when errors occur — determining what information is needed for the Chapter 8. Conclusion 191 user to correct the proof and presenting it in a meaningful way. This is a general lesson for verification systems [105]. Epilogue Verification is a central theoretical and practical problem of computer science, and much re-search is being done on different facets of the problem. Systems with very large state spaces pose a particular challenge for verification, especially when a detailed account of timing is important. For these types of state space, partial order representations can be very effective. The three major contributions of this thesis have been: • Developing a suitable theoretical framework for a temporal logic used to describe the be-haviour of finite state systems with lattice-structured state spaces; • Extending symbolic trajectory evaluation techniques to provide effective model checking for an important class of assertions about these systems; and • Developing and implementing a compositional theory for model checking, which allows the successful integration of theorem proving and automatic model checking approaches in a practical tool that can successfully verify large circuits. Bibliography [1] A. Arnold and S. Brlek. Automatic Verification of Properties in Transition Systems. Software — Practice and Experience, 25(6):579-596, June 1995. [2] M. Aagaard and C.-J.H. Seger. The Formal Verification of a Pipelined Double-Precision IEEE Floating-Point Multiplier. In ACM/IEEE International Conference on Computer-Aided Design, pages 7-10, November 1995. [3] M . Abadi and L. Lamport. Composing specifications. ACM Transactions on Program-ming Languages and Systems, 15(1):73—172, January 1993. [4] Advanced Micro Devices. PAL Device Handbook. Advanced Micro Devices, Inc., 1988. [5] H.R. Andersen, C. Stirling, and G. Winskel. A Compositional Proof System for the Modal //-calculus. In Proceedings of the 9th Annual Symposium on Logic in Computer Science, June 1994. [6] A. Aziz, T.R. Shiple, V. Singhal, and A.L. Sangiovanni-Vincentelli. Formula-Dependent Equivalence for Compositional CTL Model Checking. In Dill [48], pages 324-337. [7] R.C. Backhouse. Program Construction and Verification. Prentice-Hall, London, 1986. [8] D.L. Beatty. A Methodology for Formal Hardware Verification with Application to Mi-croprocessors. PhD thesis, Carnegie-Mellon University, School of Computer Science, 1993. [9] I. Beer, S. Ben-David, D. Geist, R. Gewirtzman, and M . Yoeli. Methodology and system for practical formal verification of reactive hardware. In Dill [48], pages 182-193. 
[10] N.D. Belnap. A useful four-valued logic. In J.M. Dunn and G. Epstein, editors, Modern Uses of Multiple Valued Logic. D. Reidel, Dordrecht, 1977. [11] S.A. Berezine. Model checking in //-calculus for distributed systems. Technical Paper, Department of Mathematics, Novosibirsk State University, 1994. [12] O. Bernholtz and O. Grumberg. Buy One, Get One Free !!! In Gabbay and Ohlbach [61], pages 210-224. [13] M.A. Bezem and J. Groote. A Correctness Proof of a One-bit Sliding Window Protocol in //CRC. The Computer Journal, 37(4):289-307, 1994. 192 Bibliography 193 [14] B. Boigelot and P. Wolper. Symbolic Verification with Periodic Sets. In Dill [48], pages 55-67. [15] T. Bolognesi and E. Brinksma. Introduction to the ISO Specification Language LOTOS. Computer Networks and ISDN Systems, 14:25-59, 1987. [16] S. Bose and A. Fisher. Automatic verification of synchronous circuits using symbolic logic simulation and temporal logic. In Claesen [33], pages 151-158. [17] R.S. Boyer and J.S. Moore. A Computational Logic Handbook. Academic Press, 1988. [18] J.C. Bradfield. A Proof Assistant for Symbolic Model-Checking. In G. von Bochmann and D.K. Probst, editors, CAV '92: Proceedings of the Fourth International Workshop on Computer Aided Verification, Lecture Notes in Computer Science 663, pages 316-329, Berlin, 1992. Springer-Verlag. [19] J.C. Bradfield. Verifying Temporal Properties of Systems. Birkhauser, Boston, 1992. [20] S.D. Brookes, C.A.R. Hoare, and A.W. Roscoe. A Theory of Communicating Sequential Processes. Journal of the Association for Computing Machinery, 31(3):560-599, July 1984. [21] R.E. Bryant. On the Complexity of VLSI Implementations and Graph Representations of Boolean Functions with Application to Integer Multiplication. IEEE Transactions on Computers, 40(2):205-213, February 1991. [22] R.E. Bryant. Symbolic Boolean Manipulation with Ordered Binary-Decision Diagrams. ACM Computing Surveys, 24(3):293-318, September 1992. [23] R.E. Bryant, D.L. Beatty, and C.-J. H. Seger. Formal Hardware Verification by Symbolic Ternary Trajectory Evaluation. In Proceedings of the 28th ACM/IEEE Design Automa-tion Conference, pages 397^107. 1991. [24] R.E. Bryant and Y.-A. Chen. Verification of Arithmetic Functions with Binary Moment Diagrams. Technical Report CMU-CS-94-160, School of Computer Science, Carnegie Mellon University, May 1994. [25] R.E. Bryant and C.-J. H. Seger. Formal Verification of Digital Circuits by Sym-bolic Ternary System Models. In E.M Clarke and R.P Kurshan, editors, Proceedings of Computer-Aided Verification '90, pages 121-146. American Mathematical Society, 1991. [26] J.R Burch, E.M. Clarke, D.E. Long, K.L. McMillan, and D.L. Dill. Symbolic Model Checking for Sequential Circuit Verification. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 13(4):401-424, April 1994. Bibliography 194 [27] J.R. Burch, E.M. Clarke, and K.L. McMillan. Symbolic Model Checking: 102° States and Beyond. Information and Computation, 98(2): 142-170, June 1992. [28] J.R. Burch and D.L. Dill. Automatic Verification of Pipelined Microprocessor Control. In Dill [48], pages 68-80. [29] O. BurkartandB. Steffen. Model Checking for Context-Free Processes. In W.R. Cleave-land, editor, CONCUR '92: Proceedings of the Third International Conference on Con-currency Theory, Lecture Notes in Computer Science 630, pages 123-137, Berlin, 1992. Springer-Verlag. [30] C.A.J, van Eijk and G.L.J.M. Janssen. Exploiting Structural Similarities in a BDD-based Verification Method. 
In Kumar and Kropf [92], pages 110-125. [31] L. Carroll. Through the Looking Glass and what Alice saw there. Macmillan and Co., London, 1894. [32] S. Christensen, H. Huttel, and C. Stirling. Bisimulation Equivalence is Decidable for all Context-Free Processes. Technical Report ECS-LFCS-92-218, Laboratory for Founda-tions of Computer Science, Department of Computer Science, University of Edinburgh, June 1992. [33] L.J.M. Claesen, editor. Proceedings of the IFIP WG 10.2/WG 10.5 International Work-shop on Applied Formal Methods for Correct VLSI Design (November 1989), Amster-dam, 1990. North-Holland. [34] E. Clarke, M . Fujita, and X. Zhao. Hybrid Decision Diagrams: Overcoming the Limita-tions of MTBDDs and BMDs. Technical Report CMU-CS-95-159, School of Computer Science, Carnegie Mellon University, April 1995. [35] E. Clarke and X. Zhao. Word Level Symbolic Model Checking: A New Approach for Verifying Arithmetic Circuits. Technical Report CMU-CS-95-161, School of Computer Science, Carnegie Mellon University, May 1995. [36] E.M. Clarke, E.A. Emerson, and A.P. Sistla. Automatic Verification of Finite-State Con-current Systems Using Temporal Logic Specifications. ACM Transactions on Program-ming Languages and Systems, 8(2):244—263, April 1986. [37] E.M. Clarke, T. Filkorn, and S. Jha. Exploiting symmetry in temporal logic model check-ing. In Courcoubetis [45], pages 450-462. [38] E.M. Clarke, O. Grumberg, and K. Hamaguchi. Another Look at LTL Model Checking. In Dill [48], pages 415-427. Bibliography 195 [39] E.M. Clarke, O. Grumberg, and D.E. Long. Model Checking and Abstraction. ACM Transactions on Programming Languages and Systems, 16(5), September 1994. [40] E.M. Clarke, D.E. Long, and K.L. McMillan. Compositional Model Checking. In IEEE Fourth Annual Symposium on Logic in Computer Science, Washington, D.C., 1989. IEEE Computer Society. [41] R. Cleaveland, J. Parrow, and B. Steffen. The Concurrency Workbench: A Semantics-Based Tool for the Verification of Concurrent Systems. ACM Transactions on Program-ming Languages and Systems, 15(1):36—72, January 1993. [42] A. Cohn. The Notion of Proof in Hardware Verification. Journal of Automated Reason-ing, 5(2): 127-139, June 1989. [43] O. Coudert, C. Berthet, and J.C. Madre. Verification of Sequential Machines Using Boolean Functional Vectors. InClaesen [33], pages 179-195. [44] O. Coudert and J.C. Madre. The Implicit Set Paradigm: A New Approach to Finite State System Verification. Formal Methods in System Design, 6(2): 133-145, March 1995. [45] C. Courcoubetis, editor. Proceedings of the 5th International Conference on Computer-Aided Verification, Lecture Notes in Computer Science 697, Berlin, July 1993. Springer-Verlag. [46] B. Cousin and J. Helary. Performance Improvements of State Space Exploration by Reg-ular and Differential Hashing Functions. In Dill [48], pages 364-376. [47] M. Darwish. Formal Verification of a 32-Bit Pipelined RISC Processor. MASc Thesis, University of British Columbia, Department of Electrical Engineering, 1994. [48] D.I. Dill, editor. CAV '94: Proceedings of the Sixth International Conference on Com-puter Aided Verification, Lecture Notes in Computer Science 818, Berlin, June 1994. Springer- Verlag. [49] J. Dingel and T. Filkorn. Model checking for infinite state systems using data abstraction, assumption-commitment style reasoning and theorem proving. In Wolper [128], pages 55-69. [50] M . Donat. Verification Using Abstract Domains. 
Unpublished paper, Department of Computer Science, University of British Columbia, April 1993. [51] A. Conan Doyle. The Adventures of Sherlock Holmes. Oxford University Press, Oxford, 1993. Bibliography 196 [52] E.A. Emerson. Temporal and Modal Logic. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B (Formal Models and Semantics), chapter 16, pages 995-1072. Elsevier, Amsterdam, 1990. [53] E.A. Emerson and J.Y. Halpern. "Sometimes" and "Not Never" Revisited. Journal of the Association for Computing Machinery, 33(1): 151-178, January 1986. [54] R. Enders, T. Filkorn, and D. Taubner. Generating BDDs for Symbolic Model Check-ing in CCS. In K.G. Larsen and A. Skou, editors, CAV '91: Proceedings of the Third International Conference on Computer Aided Verification, Lecture Notes in Computer Science 571, pages 203-213, Berlin, June 1991. Springer-Verlag. [55] J. Esparza. On the decidability of model checking for several /^-calculi and Petri nets. Technical Report ECS-LFCS-93-274, Laboratory for Foundations of Computer Science, Department of Computer Science, University of Edinburgh, July 1993. [56] J.C. Fernandez, A. Kerbrat, and L. Mounier. Symbolic Equivalence Checking. In Cour-coubetis [45], pages 85-96. [57] M . Fisher and R. Owens. An Introduction to Executable Modal and Temporal Logics. In M . Fisher and R. Owens, editors, IJCAP 93 Workshop: Executable Modal and Temporal Logics, Lecture Notes in Artificial Intelligence 897, pages 1-39, Berlin, Aug 1993. [58] M . Fitting. Bilattices and the Theory of Truth. Journal of Philosophical Logic, 18(3):225-256, August 1989. [59] M. Fitting. Bilattices and the Semantics of Logic Programming. The Journal of Logic Programming, 11(2):91—116, August 1991. [60] K. Frenkel. An Interview with Robin Milner. Communications of the ACM, 36(1 ):90-97, January 1993. [61] D.M. Gabbay and H.J. Ohlbach, editors. ICTU 94: Proceedings of the First Inter-national Conference on Temporal Logic, Lecture Notes in Artificial Intelligence 827, Berlin, Aug 1994. [62] A. Galton, editor. Temporal Logics and their Applications. Academic Press, London, 1987. [63] M.R. Garey and D.S. Johnson. Computers and Intractability: A Guide to the Theory of NF'-Completeness. W.H. Freeman and Company, New York, 1979. [64] Globe and Mail. Intel finds 'subtle flaw' in chip. The Globe and Mail, 25 November 1994, p. B16, November 1994. Bibliography 197 [65] Globe and Mail. Intel takes charge. The Globe and Mail, 18 January 1995, p. B2, Febru-ary 1995. [66] D.Goldberg. Computer arithmetic. In Computer Architecture: a quantitative approach by J.L. Hennessy and D.A. Patterson, chapter Appendix A, pages A0-A65. Morgan Kaufmann, San Mateo, California, 1990. [67] M. J.C. Gordon. Programming Language Theory and its Implementation. Prentice-Hall, London, 1988. [68] M.J.C. Gordon, editor. Introduction to HOL: a theorem proving environment for higher order logic. Cambridge University Press, Cambridge, 1993. [69] M.J.C. Gordon, CP. Wadsworth, and R. Milner. Edinburgh LCF : a mechanised logic of computation. Lecture Notes in Computer Science 78. Springer-Verlag, Berlin, 1979. [70] S. Graf and C. Loiseaux. A tool for symbolic program verification and abstraction. In Courcoubetis [45], pages 71-84. [71] O. Grumberg and R.P. Kurhsan. How Linear Can Branching-time Be? In Gabbay and Ohlbach [61], pages 180-194. [72] O. Grumberg and D.E. Long. Model Checking and Modular Verification. ACM Trans-actions on Programming Languages and Systems, 16(3):843-871, May 1994. [73] A. Gupta. 
Formal hardware verification methods: A survey. Formal Methods in System Design, 1(2/3): 151-238, October 1992. [74] N.A. Harman and J.V Tucker. Algebraic models and the correctness of microprocessors. In Milne and Pierre [101], pages 92-108. [75] J. Harrison. Binary decision diagrams as a HOL derived rule. The Computer Journal, 38(2): 162-170, 1995. [76] S. Hazelhurst and C.-J. H. Seger. A Simple Theorem Prover Based on Symbolic Tra-jectory Evaluation and OBDDs. Technical Report 93-41, Department of Computer Sci-ence, University of British Columbia, November 1993. Available by anonymous ftp as ftp://ftp.cs.ubc.ca/pub/local/techreports/1993/TR:93-41.ps.gz. [77] S. Hazelhurst and C.-J. H. Seger. Composing symbolic trajectory evaluation results. In Dill [48], pages 273-285. [78] S. Hazelhurst and C.-J.H. Seger. A Simple Theorem Prover Based on Symbolic Tra-jectory Evaluation and BDD's. IEEE Transactions on Computer-Aided Design of Inte-grated Circuits and Systems, 14(4):413^122, April 1995. Bibliography 198 [79] P. Heath. The Philosopher's Alice. Academy Editions, London, 1974. [80] M . Hennessy. Algebraic Theory of Processes. MIT Press, Cambridge, MA., 1988. [81] M . Hennessy and R. Milner. Algebraic Laws for Non-determinism and Concurrency. Journal of the Association for Computing Machinery, 32(1): 137-161, January 1985. [82] C.A.R. Hoare. An Axiomatic Basis for Computer Programming. Communications of the ACM, 12(10):576-580, 583, October 1969. [83] R. Hojati and R.K. Brayton. Automatic Datapath Abstraction in Hardware Systems. In Wolper [128], pages 98-113. [84] H. Hungar. Combining Model Checking and Theorem Proving to Verify Parallel Pro-cesses. In Courcoubetis [45], pages 154-165. [85] H. Hungar and B. Steffen. Local Model Checking for Context Free Processes. In A. Lin-gas, R. Karlsson, and S. Carlsson, editors, Proceedings of the 20th International Collo-quium on Automata, Languages and Programming, Lecture Notes in Computer Science 700, pages 593-605, Berlin, July 1993. Springer-Verlag. [86] W.A.Hunt. FM8501: A Verified Microprocessor. Lecture Notes in Artificial Intelligence 795. Springer-Verlag, Berlin, 1994. [87] P. Jain and G. Gopalakrishnan. Efficient symbolic simulation-based verification using the parametric form of boolean expressions. IEEE Transactions on Computer-Aided De-sign of Integrated Circuits and Systems, 13(8): 1005-1015, August 1994. [88] S.D. Johnson, P.S. Miner, and A. Camilleri. Studies of the Single Pulser in Various Rea-soning Systems. In Kumar and Kropf [92], pages 126-145. [89] B. Jonsson. Compositional specification and verification of distributed systems. ACM Transactions on Programming Languages and Systems, 16(2):259-303, March 1994. [90] J.J. Joyce and C.-J.H. Seger. Linking BDD-based Symbolic Evaluation to Interactive Theorem-Proving. In Proceedings of the 30th Design Automation Conference. IEEE Computer Society Press, June 1993. [91] T. Kropf. Benchmark-Circuits for Hardware-Verification. In Kumar and Kropf [92], pages 1-12. [92] R. Kumar and T. Kropf, editors. TPCD'94: Proceedings of the Second International Conference on Theorem Provers in Circuit Design, Lecture Notes in Computer Science 901, Berlin, September 1994. Springer-Verlag. Bibliography 199 [93] R.P Kurshan and L. Lamport. Verification of a Multiplier: 64 Bits and Beyond. In Cour-coubetis [45], pages 166-179. [94] L. Lamport. The Temporal Logic of Actions. ACM Transactions on Programming Lan-guages and Systems, 16(3), May 1994. [95] D.E. Long. 
Model Checking, Abstraction, and Compositional Verification. PhD thesis, Carnegie-Mellon University, School of Computer Science, July 1993. Technical report CMU-CS-93-178. [96] K. Marzullo, KB. Schneider, and J. Dehn. Refinement for Fault-Tolerance: An Aircraft Hand-off Protocol. Technical Report 94-1417, Department of Computer Science, Cor-nell University, April 1994. [97] M.C. McFarland. Formal Verification of Sequential Hardware: A Tutorial. IEEE Trans-actions on Computer-Aided Design of Integrated Circuits and Systems, 12(5):633-654, May 1993. [98] K.L. McMillan. Symbolic Model Checking: An Approach to the State Explosion Prob-lem. PhD thesis, Carnegie-Mellon University, School of Computer Science, 1993. [99] K.L. McMillan. Hierarchical representations of discrete functions, with applications to model checking. In Dill [48], pages 41-54. [100] C. Mead and L. Conway. Introduction to VLSI Design. Addison-Wesley, Reading, Mas-sachusetts, 1980. [101] G.J. Milne and L. Pierre, editors. CHARME '93: IFIP WG10.2 Advanced Research Working Conference on Correct Hardware Design and Verification Methods, Lecture Notes in Computer Science 683, Berlin, May 1993. Springer-Verlag. [102] R. Milner. Communication and Concurrency. Prentice-Hall International, London, 1989. [103] F. Moller and S.A. Smolka. On the Computational Complexity of Bisimulation. ACM Computing Surveys, 27(2):287-289, June 1995. [104] New Scientist. Flawed chips bug angry users. New Scientist, (1955): 18, December 1994. [105] S. Owre, J. Rushby, N. Shankar, andF. vonHenke. Formal Verification for Fault-Tolerant Architectures: Prolegomena to the Design of PVS. IEEE Transactions on Software En-gineering, 21(2): 107-125, February 1995. Bibliography 200 [106] S. Owre, J.M. Rushby, and N. Shankar. PVS: A Prototype Verification System. In D. Ka-pur, editor, Proceedings of the 11th International Conference on Automated Deduction — CADE-11, Lecture Notes in Computer Science 607, pages 748-752, Berlin, March 1992. Springer-Verlag. [107] L.C.Paulson. ML forthe working programmer. Cambridge University Press, New York, 1991. [108] L. Pierre. VHDL Description and Formal Verification of Systolic Multipliers. In D. Ag-new, L. Claesen, and R. Camposano, editors, Computer hardware description languages and their applications : proceedings of the 11th IFIP WG10.2 International Conference on Computer Hardware Description Languages and their Applications - CHDL'93, num-ber 32 in IFIP Transactions A, pages 225-242, Berlin, 1993. Springer-Verlag. [109] L. Pierre. An automatic generalization method for the inductive proof of replicated and parallel structures. In Kumar and Kropf [92], pages 72-91. [110] D.Price. Pentium FDIV Flaw - lessons learned. IEEE Micro, 15(12):88-86, April 1995. [Ill] S. Rajan, N. Shankar, and M.K. Srivas. An Integration of Model Checking with Auto-mated Proof Checking. In Wolper [128], pages 84-97. [112] J.M. Rushby and F. von Henke. Formal Verification Algorithms for Critical Systems. IEEE Transactions on Software Engineering, 19(1), January 1993. [113] M. Ryan and M . Sadler. Valuation Systems and Consequence Relations. InS. Abramsky, D.M. Gabbay, and T.S. Maibaum, editors, Handbook of Logic in Computer Science, vol-ume 1 (Background: Mathematical Structures), chapter 1, pages 1-78. Clarendon Press, Oxford, 1992. [114] T. Ralston S. Gerhart, D. Craigen. Experience with Formal Methods in Critical Systems. IEEE Software, ll(l):21-28, June 1994. [115] C.-J.H. Seger. Voss — A Formal Hardware Verification System User's Guide. 
Technical Report 93-45, Department of Computer Science, Univer-sity of British Columbia, November 1993. Available by anonymous ftp as ftp://ftp.cs.ubc.ca/pub/local/techreports/1993/TR-93-45.ps.gz. [116] C.-J.H. Seger and R.E. Bryant. Formal Verification by Symbolic Evaluation of Partially-Ordered Trajectories. Formal Methods in Systems Design, 6:147-189, March 1995. [117] C.-J.H. Seger and JJ. Joyce. A Mathematically Precise Two-Level Hardware Verifica-tion Methodology. Technical Report 92-34, Department of Computer Science, Univer-sity of British Columbia, December 1992. Bibliography 201 [118] H. Simonis. Formal verification of multipliers. In Claesen [33], pages 267-286. [119] V. Stavridou. Formal methods and VLSI engineering practice. The Computer Journal, 37(2), 1994. [120] C. Stirling. Modal and temporal logics. In S. Abramsky, D.M. Gabbay, and T.S. Maibaum, editors, Handbook of Logic in Computer Science, volume 2 (Background: Computational Structures), pages 477-563. Clarendon Press, Oxford, 1992. [121] C.Stirling. Modal and Temporal Logics for Processes. Technical Report ECS-LFCS-92-221, Laboratory for Foundations of Computer Science, Department of Computer Sci-ence, University of Edinburgh, June 1992. [122] C. Stirling and D. Walker. Local model checking in the modal mu-calculus. Theoretical Computer Science, 89(1): 161-177, October 1991. [123] PA. Subrahmanyam. Towards Verifying Large(r) Systems: A strategy and an experi-ment. In Milne and Pierre [101], pages 135-154. [124] A.M. Turing. On computable numbers, with an application to the Entscheidungsprob-lem. Proceedings of the London Mathematical Society (Second Series), 42:230-265, 1937. [125] A. Visser. Four Valued Semantics and the Liar. Journal of Philosophical Logic, 13(2):181-212,May 1984. [126] O. Wilde. The Importance of being Earnest: a trivial comedy for serious people. Van-guard Press, New York, 1987. [127] G. Winskel. A note on model checking the modal z/-calculus. In G. Ausiello, M . Dezani-Ciancagnlini, and S. Ronchi Delia Rocca, editors, Proceedings of the 16th International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science 372, pages 761-771, Berlin, 1989. Springer-Verlag. [128] P. Wolper, editor. CAV '95: Proceedings of the Seventh International Conference on Computer Aided Verification, Lecture Notes in Computer Science 939, Berlin, July 1995. Springer-Verlag. [129] Z. Zhu. A Compositional Circuit Model and Verification by Composition. In Kumar and Kropf [92], pages 92-109. [130] Z. Zhu and C.-J.H. Seger. The Completeness of a Hardware Inference System. In Dill [48], pages 286-298. Appendix A Proofs A . l Proof of Properties of TL A.1.1 Proof of Lemma 3.3 Lemma A . l (Lemma 3.3). If g, h: S —>• Q are simple, then D(g) = D(h) implies that V s £ S,g(s) = h(s). Proof. To emphasise that D(g) = D(h), we set D = D(g). Let s £ 5. Let E = {q € Q : (sq, q) £ D A sq C 5} and let e = UE. The proof first shows that p(s) = e. (0)5(5) Z< e 0 is simple. Definition of defining pair. Definition of £ , (1), (2). Definition of join. (1) 3sp £ S 9 (sp,g(s)) £ £> (2) 5 P C s (3) < 7 ( s ) £ F , (4) flf(5) ^  U£ (b)e^g(s) (1) V K ) ? ) € D , g ^ W (2) 0(5) is an upperbound of E (3) UE^g(s) Thus 0(5) = e. Similarly, /i(s) = e. Therefore, g(s) = h(s). As s was arbitrary V s £ S,g(s) = h(s) q = g(sq),sq C 5, 0 is monotone. (1) Property of join. • Note that the proof does not rely on the particular structure of Q; it only relies on Q being 202 Appendix A. Proofs 203 a complete lattice. 
A.1.2 Proof of Theorem 3.5 The idea behind the proof is to partition the domain of an arbitrary p : S —>• Q depending on the value of p(s). Then, we construct a function which enables us to determine which partition an element falls in. We can now use this information in reverse — once we know which partition an element falls into, we can return the value of the function for that element. The complication of the proof is to use the properties of Q to combine all this information together. As an analogy suppose that g : S —> {—10,10}. Suppose we know that g~(s) = 1 if g(s) = —10 and g~(s) = 0 otherwise; and g+(s) = 1 if g(s) = 10 and g+(s) = 0 otherwise. Then we can write g(s) = — lOg^(s) + 10g+(s). The two steps in doing this were to find the and g+ functions, and then to determine how to combine them. The proof of Theorem 3.5 follows a similar pattern: first the functions that are the equivalent of g- and g+ are given; after that it is shown that these functions can be combined to simulate p, and that they can be constructed from simple predicates. The functions given in the next definition are the analogues to the g_ and g+ functions. Definition A . l . Suppose that we have an arbitrary monotonic function p : S —>• Q. Define the following: ( T 3S e S,s C u,p(s) = f t 3s G <S, s C u,p(s) = t J_ otherwise. T 3s G <5, s C u, p(s) = T t otherwise. t otherwise. • The proof of Theorem 3.5 comes in two parts: first, Lemma A.2 demonstrates how to combine Appendix A. Proofs 204 the Xq functions to construct a function p' that is equivalent to p; and Theorem A.3 concludes by showing that the functions Xf» Xt and X T can be defined from the simple predicates using TL operators. Lemma A.2. Let p : S -» Q be a monotonic predicate. Define p'(u) by P ' ( « ) = Xt(«) Axf(w) A X T ( « ) V ->XT(U)-Then, Vs € <S,p(5) = (a) Suppose p(u) =J_ (1) Xf(«) = XT(W) = t,xt(u) =_L (2) p'(u) = t A 1 At V f = J_= p(u) By definition and monotonicity of p. (1) Suppose p(u) = f (1) Xf(u) = T , Xt(u) — -U XT(«) = t By definition and monotonicity of p. (2) j/(u ) = l A T A t V f = f = p(u) (1) (c) Suppose p(u) = t (1) Xt{u) = t , X f ( « ) = X T ( « ) = t . (2) p'(u) = t A t A t V f = t = p(u) By definition and monotonicity of p. (1) Appendix A. Proofs 205 (d) Suppose p(T) = T (1) XT{U) = T, _L 7±Xt(u), By definition and monotonicity of p. t=<Xf(«) (2) p'(u) y± At A T V T (1) and monotonicity of A and V. = f V T = T = P(u) (3) p'(u) = p(u). Since T = UQ. • The final part of the proof is to show that the Xq functions can be constructed from the simple predicates. Theorem A.3 (Theorem 3.5). For all monotonic predicates p : S —> Q, 3p' € TL such that Vs £ <S, p(s) = p'(s). Proof. Partition S according to the value of p: S± = {s G S : p(s) =1} 5 f = {s € 5 : p(s) = f} 5 t = { s € 5 : p(s) = t} • 5 T = { s e 5 : p(s) = T} . Some of these sets may be empty. Now, for each s £ S we define x's '• $ ~> Q (each x's char-acterises all elements at least as big as s) as follows: {t s rz* Note that each x's is simple. For the purpose of this lemma, define V 0 =1. The x' have the following two properties. Appendix A. Proofs 206 Suppose 3s G S 9 s fZ u,p(s) = g/or s o m e g G Q . (1) ^ ( u ) = t Definition of x'-(2) V {x'v(u) v e Sq} = t s £ Sq by supposition. Suppose /Js G <S" 9 s fZ i/,p(s) = q for some q €. Q. (1) Vv £ Sq,v g.u Supposition. (2) V {x«(u) : u ^ ^ 9 } = - L Either 5g is empty or follows from (1). 
Now, define: Xf{u) = -.( V { * ' » : s G Sf}) V T Xt(«) = V {x'» : s G St} XT(«) = - ( V { X » : seST}) V T Using the properties of x' proved above, we have that: _T 3s G <S,3 [Z u,p(s) = f Xf(«) t otherwise. t 3s G S, s fZ u,p(s) = t • Xt(«) I J_ otherwise. T 3s G <S", s rz u, p(s) = T XT(«) = j t otherwise. Note that we have constructed from simple predicates the functions Xs given in Definition A. 1. Thus, by Lemma A.2, given an arbitrary monotonic predicate p, we are able to define it from simple predicates using conjunction, disjunction and negation - showing we can consider any monotonic state predicate as a short-hand for a formula of TL. • Appendix A. Proofs 207 A.1.3 Proof of Lemma 3.6 Lemma A.4 (Lemma 3.6). 1. Commutativity: 01 A 02 = 02 A C/i, 0 l V 02 = 02 V 0 i . 2. Associativity: (gi V 02) V 03 = 0i V (02 V 03), (01 A 02) A 03 = 0i A(02 A 03) 3. De Morgan's Law: 0 i A 0 2 = -"(-"01 V - i f l b ) , gi V 02 = -"(-"01 A - " f l T 2 ) . 4. Distributivity of A and V : hA(givg2) = (hA9l) V (/iA0 2), hv(giAg2) = (h V 9l) A(h V g2) 5. Distributivity o / N e x t : N e x t ( 0 i A 0 2 ) = ( N e x t 0i) A ( N e x t 02), N e x t (01 V 02) = ( N e x t 0X) V ( N e x t 02). 6. Identity: 0 V C f = 0, 0 A C t = 0. 7. Double negation: - , _ , 0 = 0 Proo/ The proofs all rely on the application of the definition of satisfaction and the properties of Q. Let a € Sw be given. 1. Follows from the commutativity of Q. 2. Follows from the associativity of Q. 3. Sat(a, 01 A0 2 ) = Sat(a, gi) A SanV,g2) = ->(->Sat((T,gi) V -^Sat(a,g2)) = -<Sat{<T,->g1 V ->02) = 5ar((7,-"(-"0i V ->02)). Similarly, 0! V 02 = ->(-"0i A->02)-Appendix A. Proofs 208 4. Follows from the distributivity of Q. 5. Sat( cr,Next(g1Ag2)) = Sat(a>i,gx A g2) - Sat(o->i,gi) !\Sat(a>i,g2). = Sat(cr, N e x t g\) A Sat(a, Nextg 2 ) . = Sat(a, ( N e x t gt) A ( N e x t g2)). The proof for disjunction is similar. 6. Follows since t is the identity for A with respect to Q and f is the identity for V with respect to Q. 7. Sat(a,->-ig) = -^Sat(a,->g) = ->^Sat(cr,g) — Sat(a,g) A.1.4 Proof of Lemma 3.7 Lemma A.5 (Lemma 3.7). If p is a simple predicate over Cn, then there is a predicate gv £ TL„ such that p = gp. Consider (sg,q) G D(p), and suppose that p(u) = q. Then, since p is simple, for all i = 1,... ,n, s,[j]Cw[!]. What we will do is construct functions that enable us to check whether for alii = 1,... , n, sq [i] C u [i]. This will be enough of a building block to complete the proof. We define the functions x'q s o that x'q(h v) = t if sq[i] C v[i] and x'q(h v) =-L if sq[i] % VV\-Formally, the x'fi are defined as: Since a is arbitrary the result follows. • Proof. J_ V ->[i] when sq[i] = L. x'q(hv) = < j _ v [t] when sq[i] = H. J_ V (->[i] A[i]) when sq[i] = Z. Appendix A. Proofs 209 Informally, this means that x[q, v) indicates whether v is greater than the z-th component of p's defining value for q. Extending this, we get that A"= 1(x' g(«, v)) returns t if sq rz v and returns J_ otherwise. Extend this further by defining: Xf(s) = AMXfihs))) V T , if 3(a,, f) G D(p) = _L, otherwise. Xt{s) = A (x'tihs))i if3{st,t)eD{p) i=\ =_L, otherwise. XTW = -(.A(XT(M)))VT, i f3( S T , f)ei>(p) =J_, otherwise. Considerxt- Suppose 3s such mats fZ uandp(s) = t. Since pis simple, st E -s- By transitivity s t E u . Hence by the remarks above, xt(v) = t. On the other hand, if /3s such that s (Z u and p(s) = t, then either t is not in the range of p or s t £ u . In either case Xt(u) =-L. For the case of q = f, T , suppose 3s such that s rz v and p(s) = g. 
Since p is simple, s 7 C s. By transitivity sq FZ u . Hence by the remarks above, Xq(v) = T . On the other hand, if /3s such that s rz u and p(s) = g, then either q is not in the range of p or s 9 u . In either case Xq(v) = t. This implies that the definitions given of Xq here are equivalent to those of Definition A . l , and thus we can apply Lemma A.2. As the xq are constructed here as formulas of T L n , the proof is complete. • A.2 Proofs of Properties of STE This section contains proofs of theorems and lemmas stated in Chapter 4. Proof of Lemma 4.3 First, an auxiliary result. Appendix A. Proofs 210 Lemma A.6. If g G TL,cjr = f,t,<y G A%),theng r< Sat(8,g). Proof. Let g G TL, 5 = s0sis2 . . . G A ?(g). Proof by structural induction. If g is simple (base case of induction): (1) Either (s0, ?) G £>(#) or (s0, T ) G D(c/) Definition of A 9 (2) Saf(5,0) G {9, T } Definition of Sat. (3) 9 X 5ar((5, g) Definition of < . Letg = gi A g2. (1) Sat(S,g) = Sat(8,gi) A Sat(8,g2) Suppose q = t, i.e. c> G A^g) . (2a) 381 G A'(flfi), cT2 G A"(g2) 9 £ = U 82 (3a) qdiSat(81,gl),q^Sat(Si,g2) (4a) g X Sar(c>,£1), <? ^ Saf(<S,g2) (5a) c7^5ar(5,0) Suppose q = f, i.e. 5 G A f (0). (2b) Either (or both) 8 G A % i ) or 8 G A»($r 2) Suppose (without loss of generality) that 8 G A 9(#i). (3b) f ^ f l ^ , 5 i ) (4b) Trivially,! ^Sat(82,g2) (5b) f A l ^ Sat(8\gi) A Sat(82,g2) = Sat(8,g). (6b) But f A -L= f which concludes the proof. By definition. Construction of A Inductive assumption. Monotonicity. (1), (4a), monotonicity of A. Construction of A Inductive assumption. (3b), (4b) Appendix A. Proofs Letg = ->0i. (1) SeA-i(gi) (2) ^q±Sat{8,gi) (3) -iq±^Sat(6,g) (4) Hence q ^ Sat(8, g). Construction of A. Inductive assumption. Sat(8,g) = -iSat(6,gi). Lemma 3.1(2). Let g = Next g\. (1) s0 = X a n d s l 5 2 . . . e A"(gi) (2) q ^  Sat(siS2 . . . ,gi) (3) q ^  Sar(XsxS2 . . . ,Next5f!) (4) q±Sat(8,g) Construction of A. Inductive assumption. From (2), definition of Sat. Monotonicity of Sat. Suppose g = gi Untilg 2. By definition, Saf(cr,flri Until02) = W^l0(Sat(a>0,gi) A . . . A Sat(a>i-!,gi) A Sat(a>i,g2 Let S G A?(g) be given. Suppose q — t, i.e. 8 G A*(<7i Untilg 2) (1) 3i 9 5 G A^NextV) II . . . II A^Next^'" 1)^) II A^Next^) Construction of A (2) Vj = 0,... , i, 38j G A^Next^i) such that U{Sj : j = 0,... , i} = 8 Definition of II. (3) SiQ 8, j = 0,... , i (2), property of join Appendix A. Proofs 212 (4) t ^ Sat{8\ Next-7^), j = 0,... , i — 1 Inductive assumption. (5) t •< Sat(8, Next^i) (4), monotonicity. (6) Similarly t ^ Sar(£, Next'#2) (7) t X 5a<cT, (NextV) A . . . A (Next^1)^) A (Next^2)) (5) and (6). (8) t ^ Sat(S, gi Until g2) Definition of Sat. Suppose q — f, i.e. 8 G A f(^! Untilg 2). (1) Vi = 0,... , 3<? with 8{ C 5 and <£*' G A f (NextV) U .. . U A^Next^- 1'^) U A f(Next^2) Construction of A. (2) Vi = 0,... ,8i G A f (Next0gi A . . . A Next( i-1'gi) A NextJg2 Definition of A f (3) f X 5ar(cT8', Next°c/i A . . . A Next(i_1'#i A NextJ#2) Inductive assumption. (4) f <Sat( 8, gi Unt i 1 g2) Definition of gi Unt i 1 g2. Lemma A.7 (Lemma 4.3). Let g G TL, and let a G Sw. For q = t,f,q< Sat(a, g) iff 38g G A9(fir) with 89 C cr. Proo/ (=^) Assume that g X Sa^a, g). The proof is by structural induction. Suppose g is simple (base case of induction). (1) q ^  g{o-0) and 3c/ G {<?, T } with (sg/, q') G £>(#) and v C cr0 <7 is simple. (2) s , ,XX. . . Ecr From(l). (3) But sq,XX... G A%) Definition of Aq(g). Suppose g = gihg2. Appendix A. Proofs 213 Suppose q = t. 
(1) t •< Sat(a, gi), t -< Sat(a,g2) (2) 3S1 G A ' ( » , 62 e A 4 ^ ) with 8\82Qa (3) Let cT = S1 U cT2 (4) 8 C (5) J e A ' f o ) Suppose q = f. (1) Either (or both) f ^ SanV, #i) or f ^ Sar(cr, 52) Without loss of generality assume f z< Sat(a, gi). (2) 381 e Af(gi) with S1 Co-(3) ^ A f ( j ) Suppose g = ->g\. (1) c j rE -«&tf (0 - , f l r i ) (2) ~-g^5flf(cr,c;i) (3) 3<JG A ^ ( f l r i ) 9<JCcr (4) <5eA%) Suppose g = Next gi. (1) qdSat(a>!,gi) (2) a a e A ' t e O B J C a M (3) X c T G A % ) (4) X £ C c r Suppose g = gi U n t i l g 2 . Lemma 3.2(2). Inductive assumption. From (2). Definition of A(g) Lemma 3.2(3) Inductive assumption. Definition of A (g). Definition of Sat Lemma 3.1. Inductive assumption. Definition of A(g). Definition of Sat. Inductive assumption. Construction of A(g). X C cr0 Appendix A. Proofs 2 1 4 Suppose q = t. 3i 3 t r< 5ar(<r,NextV) A ... A Sat(a, Next ( , ' - 1 ) 0i) A Sa^cr, Next l02) Lemma 3.2(1) t z< Sat(a, Next*flf2) and From (1), Lemma 3 . 2 ( 2 ) . Vj = 0,... , i - 1, t ^ Sat(a, Next j0i) 3<? € At(Next*flf2) 3 8{ Qa Inductive assumption Vj = 0,... , i - 1, 38j G A^Next^i) 3 83 Qa Let 5 = 8° U . . . U <J«' 5 G A ^ N e x t V ) E . . . II A^Next**'-1)^) II A ^ N e x t ^ ) Construction of A SQa (3) and (4) . 8 G A*(0i U n t i l g2) (5), construction of A . Suppose q = f. V*,f ^ 5ar(a, Next V ) A ... A Sat(a, Next ( l _ 1 ) 0i) A 5a/(<r, Next{02) Lemma 3 .2(4) Either f X 5a?(cr, Next*$f2) or Bj € 0,.. . , i - 1 3 f r< Sar(a, Next J0i). Lemma 3 .2(3) Either 38'1 3 S>1 e A f (Next!02) with 8'% • cr, or Inductive assumption. 3<P e A f (Next^) and 83 C a. In either case, Vi, by construction 38{ E A f (Next°(/i) U . . . U A f (Next^-1)^) U A f (Next'flr2) with 8i C cr Let 5 = Ll£0<$\ 8 e A f (01 Until02) Construction of A f . C cr <S is a complete lattice. Appendix A. Proofs 215 (<=) Let g G TL, a G Sw, and assume that 35 s G A«($r) such that 5 s C a. By Lemma A.6, g ^  Sat(S9, g). By the monotonicity of Sar, <? ^ Sa?(cr, g). • A.2.1 Proof of Lemma 4.4 Lemma A.8 (Lemma 4.4). Let g G TL, and let a be a trajectory. For q = t, f, q ^  Sar(<7, g) if and only if 3 r s G ^(g) with r 9 C cr. Proo/ (=>) Suppose g X Sat(a, g). By Lemma 4.3, 35 5 G A % ) such that 5 s C cr. Let T9 = T(59). Note that T9 G T % ) by construction and that 89 C r9. T9 C cr: the proof is by induction. 1- = <Jg T cr 0. 2. Assume T / • cr4-. 3. Since a is a trajectory, E Ct '+l *f+l E C r i + i ^ i = ^ i U Y ( 7 f ) E CTj+l Monotonicity of Y cr is a trajectory. Since S9 • cr. Definition of T9. Property of join. («=) Suppose 3T 5 G T % ) such that T9 • CT. As G Tq(g), 369 G A % ) such that 59\ZT9. By transitivity, S9 C cr. By Lemma4!3, <? X Sat(69,g). By monotonicity g X 5ar(cr, #). • Appendix A. Proofs 216 A.2.2 Proof of Theorem 4.5 Theorem A.9 (Theorem 4.5). If g and h are TL formulas, then Al(h) Qv T\g) if and only if g=$>h. • Proof. (=>•) Recalling the definition of ==§> on page 71, suppose V r 3 e Tb(g), 38h e A*(fc) w i t h ^ C r 3 . Suppose t X Sat(a, g). By Lemma 4.4, 3r9 e T % ) such that r5 C cr. By assumption then, 3Sh G A 9 (/i) , with Sh Q T 3 . By transitivity, C cr. By Lemma 4.3, g ^  Sat(a, h). (<£=) Suppose for all trajectories cr, t ^ Sat(cr, g) implies that t -< Sat(a, h). LetT9eT\g). Then by Lemma 4.4, t ^ Sat(r9,g). By the assumption that g=^h, t ^ Sat(r9, h). By Lemma 4.3, 3<Jfc e A"(/i) such that Sh Q r 9 . As T9 was arbitrary, the proof follows. • Appendix A. 
Proofs 217 A.3 Proofs of Compositional Rules for T L n Recall that in this section we are dealing solely with the realisable fragment of TLn. Theorem A.10 (Identity - Theorem 5.14). For all g e TL„, g=og. Proof. Let t = Sat(a,g). Clearly then t = Sat(a,g). Hence g=s>g. • Lemma A . l l . Suppose g=C>h. Then Next g=$> Next h Proof. Let a G UT 3 t = Sat(a, Next g). (1) t = Sat(a>i,g) Definition of Sat. (2) t = Sat(a>i,h) g=$>h. (3) t = Sat(Xa>u Next h) Definition of Sat. • (4) t = Sat(a, Next h) (3), monotonicity of Sat, Lemma 4.8. (5) Next g =>Next h. Corollary A.12 (Time-shift - Theorem 5.15). Suppose g=oh. Then Vt > 0, Next*<?=r>Next</i. Proof. Follows fromLemmaA.il by induction. • Theorem A.13 (Conjunction - Theorem 5.16). Suppose 0i ==t>hi and g2 =t>/i2-Then 0i A 02 = t > ^ i A / J 2 -Proof. Let cr e TZT and suppose t = Sat(a, g\ A 02). Appendix A. Proofs (1) t = Sat(a,gi) A Sat(a,g2) Definition of Sat(a,gi A g2). (2) t = Sat(a,gi), i = 1,2 Lemma 3.2(2). (3) t = Sat(a,hi), i = 1,2 Sincegi=o/ii, i = 1,2. (4) t = Sat(a,hi) ASat(a,h2) (3) (5) t = Sat(a, hihh2) Definition of Sat(a, hx A h2). As cr is arbitrary, gi A g 2 =t>hi/\h2. Theorem A.14 (Disjunction - Theorem 5.17). Supposegi=t>/ii andg2=t>h2. Then gx V g2=t>hi V / i 2 . Praq/! Let cr G 7?.r a n d suppose t = Sat(a,gi V g2). (1) t = Sat(a,gi)V Sat(a,g2) Definition of 5ar(cr, ^  V g2). (2) t = 5af(cr, ^ j ) , for i = 1 or i = 2 Lemma 3.2(1), Lemma 4.8. (3) t = Sat(a, hi), for i = 1 or i = 2 Since gi==i>hi, i = 1,2. (4) t = Sat(a,hi) V Sat(a,h2) (3) (5) t = Sat(a,hi V /i 2) Definition of Sar(<7, V /i 2). As cr is arbitrary, gi V g2=S>hi V fc2. Lemma A.15. Suppose A*(5f) CT> A k(/i), a G 7^ r and t = Sctf(cr, fc). Then t = Sat(a,g). Proof. (1) t^Sat(<T,h) t = Sat(a,h). (2) 3£ G A*(fc) 9 <jCcr and t =< Sa^c), fc) (1), Lemma 4.3. (3) 38' G A^s-) 9 5' • 8 Definition of Qv . (4) t < Sat(8',g) Lemma 4.3. Appendix A. Proofs 219 (5) 8' C a Transitivity of (2) and (3). (6) t X Sat(a, g) From (4) and (5) by Lemma 4.3. (7) t = Sat(a,g) (1), Lemma 4.8. • Theorem A.16 (Consequence - Theorem 5.18). Suppose g=S>h and Ak(g) \ZV Al(gi) and A\hi) Q-p A^h). Then 0 i= o / i i . Proof. Suppose a £ TZj- is a trajectory such that t = Sat(a, gi). (1) t = Sat(cr,g) Lemma A.15. (2) t = Sat(cr,h) g=$>h. (3) t = Sat( a,Lemma A.15. (4) g1=c>hi Since a is arbitrary. • Theorem A.17 (Transitivity - Theorem 5.19). Suppose gl=c>h1 andg2=c>h2 and that At(g2) C.v At(g1) II A*( / i i ) . Then gi=t>h2. Proof. Suppose a £ TZT is a trajectory such that t = Sat(a, gi). (1) t = Sat(a,hi) g1==c>hi (2) t = Sat(a,giAhi) Definition of Sat(a, gi A hi). (3) 35 £ A*(flfi A / i i ) 9 5 C cr Lemma 4.3. (4) A*(01 A hi) = At(gl) II A*( / i i ) By definition of A*. (5) 35' £ A t ( 0 2 ) 3 8'\Z8 A\g2)nv A\gi)U A\hi). Appendix A. Proofs 220 (6) 8'Q a Applying transitivity to (3) and (5). (7) t d Sat(a, 92) From (6) by Lemma 4.3. (8) t = Sat(a 92) From (7) by Lemma 4.8. (9) t = Sat(a h2) g2=0>h2. (10) 9i = *>h2 Since a was arbitrary. • Lemma A.18 (Substitution Lemma). Suppose |= (\ g=z>h } and let £ be a substitution: then f= i\ £(g)=$>£(h) Proof. Let (j) be an arbitrary interpretation of variables and a £ TZr be an arbitrary trajectory such that t = Sat(v,<f>(Z(g))). (1) Let0' = 0o£ (2) t = Sat(a,(f>'(g)) Rewriting supposition. (3) <f>' is an interpretation of variables By construction. (4) t = Sat(a,(f>'{h)) \= $g=i>h). (5) t = Sat(a,(f)(t(h))) Rewriting (4). (6) |= { £(g)=t>£(h)} <j) a n d cr were arbitrary. • Lemma A.19 (Guard lemma). 
Suppose e ∈ E and |= <| g ==>> h |>: then |= <| (e ⇒ g) ==>> (e ⇒ h) |>.

Proof. Suppose t = Sat(σ, e ⇒ g) for some σ ∈ R_T. Recall that e ⇒ g = (¬e) ∨ g, and note that Sat(σ, ¬e) ∈ B. By the definition of the satisfaction relation, either:

(i) t = Sat(σ, ¬e). In this case, by definition of satisfaction, t = Sat(σ, e ⇒ h).

(ii) t = Sat(σ, g). In this case, by assumption t = Sat(σ, h). So, by definition of satisfaction, t = Sat(σ, e ⇒ h).

As σ was arbitrary the result follows. □

Theorem A.20 (Specialisation Theorem - Theorem 5.20). Let Ξ = [(e1, ξ1), ... , (en, ξn)] be a specialisation, and suppose that |= <| g ==>> h |>. Then |= <| Ξ(g) ==>> Ξ(h) |>.

Proof.
(1) For i = 1, ... , n, |= <| ξi(g) ==>> ξi(h) |>    By Lemma A.18.
(2) |= <| (ei ⇒ ξi(g)) ==>> (ei ⇒ ξi(h)) |>          By Lemma A.19.
(3) |= <| (e1 ⇒ ξ1(g)) ∧ ... ∧ (en ⇒ ξn(g)) ==>> (e1 ⇒ ξ1(h)) ∧ ... ∧ (en ⇒ ξn(h)) |>
                                                     Repeated application of Theorem A.13.
(4) |= <| Ξ(g) ==>> Ξ(h) |>                          By definition. □

Theorem A.21 (Until Theorem - Theorem 5.21). Suppose g1 ==>> h1 and g2 ==>> h2. Then g1 Until g2 ==>> h1 Until h2.

Proof. Let σ ∈ R_T be a trajectory such that t = Sat(σ, g1 Until g2).
(1) There is an i such that t = Sat(σ, Next⁰ g1) ∧ ... ∧ Sat(σ, Nextⁱ⁻¹ g1) ∧ Sat(σ, Nextⁱ g2)
                                                               Definition of Sat, Lemma 3.2(1), Lemma 4.8.
(2) t = Sat(σ, Nextⁱ g2) and t = Sat(σ, Nextʲ g1), j = 0, ... , i-1    Lemma 3.2(2).
(3) t = Sat(σ, Nextⁱ h2) and t = Sat(σ, Nextʲ h1), j = 0, ... , i-1    g2 ==>> h2 and g1 ==>> h1, Corollary A.12.
(4) t = Sat(σ, Next⁰ h1) ∧ ... ∧ Sat(σ, Nextⁱ⁻¹ h1) ∧ Sat(σ, Nextⁱ h2) Definition of Sat.
(5) t = Sat(σ, h1 Until h2)                                            Definition of Sat.
(6) g1 Until g2 ==>> h1 Until h2                                       Since σ was arbitrary. □

Appendix B

Detail of testing machines

This chapter presents the details of testing machines. Section B.1 formally defines composition of machines. Subsequent sections build on this by showing how testing machines can be constructed and composed with the circuit under test: Section B.2 presents some notation used; Section B.3 presents the building blocks from which testing machines are constructed; and Section B.4 shows how model checking is accomplished.

B.1 Structural Composition

The focus of the research on composition is the property composition described in Chapter 5. However, sometimes it is also desirable to reason about different models and use partial results to describe the behaviour of the composition of the models. A full exploration of composing models of partially ordered state spaces is beyond the scope of this thesis: there are important considerations which need attention [129]. A partial exploration of the area is useful though for two reasons: (1) it gives a flavour of how structural composition could be used; and (2) some of the definitions given are needed in justifying the details of testing machines.

The content of this section is very technical. Although conceptually the composition of systems is very simple, the notation needed to keep track of the detail is not. This section is included for completeness, and its details are not needed in understanding the thesis.

This section has three parts. First, composition of models is defined formally. Second, inference rules for reasoning about a composed model are given. The third part elaborates on composition for circuit models, where the definition of composition has a natural instantiation.

B.1.1 Composition of Models

Definition B.1. Let M1 = ((S1, ⊑1), R1, Y1), M2 = ((S2, ⊑2), R2, Y2), and M = ((S, ⊑), R, Y) be models.
Let X i , X 2 and X be the bottom elements of Si, S2 and S; Z i , Z 2 , and Z be their top elements; and let Gi, G2 and G be the simple predicates of «S*i, S2 and S respectively. If p : Si x S2 -» S, pi : G\ —> G and p2 : G2 -¥ G then M is a p-composition of Mi and M2if 1. pis monotonic; 2. p(X 1 ,X 2 ) = Xand. /o(Z 1,Z 2) = Z; 3. q = gi(si) ==• q = pi(gi)(p(suX2)); 4. q = g2{s2) =>• q = p2(g2)(p(Xus2)); 5. p(Yi(si),Y2(s2))QY(p(si,s2)). The required properties on p may seem onerous, and indeed in general they may be too restric-tive. However, for the application of composition needed in this thesis they are sufficient. In particular, for compositions where the 'outputs' of one circuit are connected to the 'inputs' of another, these conditions will be met. Definition B.2. We inductively extend the domain of the pi by defining • Pi(gAh) = pi(g) A pi(h); • pii^g) = -"(^ (sO); • pi(Nextg) = Next pi(g); • pi(g\Jntilh) = pi(g) Until pi(h). Appendix B. Detail of testing machines 224 Definition B.3. Let Mi, M2 and be models and M a p-composition of Mi and Mi- Let cr 1 £ <S^ , cr 2 £ <S^ . ^ ( ( J 1 , ^ 2 ) = p ( a i , a 2 )p ( c r 1 1 , c r 1 2 ) . . . Since we are dealing with different models, we modify the notation for the satisfaction relation 9-Lemma B.l. Let Mi and M2 be models and M a /^-composition of Mi and M2. Let a* £ <Sf, j = 1,2. Suppose g £ TL(Sj) and c/ = SatMj{cr\g). Then ^ SatM(p(a1,a2), pj(g))-Proof. The proof is by induction on the structure of g. We assume without loss of generality that j = 1. Suppose q = Sat^ (cr 1, g) where g is simple (1) q = g(aJ0) Definition of satisfaction. and use the notation SatM, (cr, g) to refer to whether the sequence cr of the model M j satisfies (2) q = Pi(g){p{<Th,X2)) (3) q^PiigM^a2)) (4) q± SatM(p(cr\cr2),pi(g)) Suppose q = SatMl {el,ga A gb) (1) Letc/u, = SatMi{vl-,gw)-, w = a,b Monotonicity. Definition B.l(3). Definition of satisfaction. (2) q = qaAqb Definition of satisfaction. (3) qw •< SatM(p(a1,a2),pi(gw)),w (4) q< SatM(p(o-\cr2),pi(gaAgb)) Suppose q = Sat^ vi, (cr 1, ->h) (1) -ig = SatMl(crl,h) a, b Inductive assumption. Definition B.2 and satisfaction. Sat(a\-^f) = ^Sat(cr1,f) Appendix B. Detail of testing machines 225 (2) -. g r< SatM{p{<j\cT2),Pl{h)) (3) q^ SatM{p{al,a2),pi{^h)) Suppose q = Sat^ij (cr 1, N e x t h) (1) q= SatMl(aluh) (2) q-< SatM(p(o'l.1,all),pi(h)) (3) q^ 5flf(p(<7 1 ,<7 2 ) , / 5i(Next/i)) Suppose q = Sat^ (cr 1, / i ! U n t i l / i 2 ) (1) Let c/j = Safx^a^g, / i i ) A . . . A Sa? (2) q = V?=0qi (3) r< 5ar>i(/o((7^0,<7|o),Pi(^i)) A ... A Sar^ (p (al3-, <r| ^ ) , px ( / i 2 ) ) (4) q± y%0(SatM{p{ Pr(hi))A (5) q^ SatM(p(cr\cr2),hl\]ntilh2) Inductive assumption. Definition B.2 and satisfaction. Sat(al, N e x t h) — Sat(a^1,h) Inductive assumption. Definition B.2 and satisfaction. {o-l^hi) A SatMl(o-ljih2) Definition of satisfaction. 5ar7w(p(cr> J _ 1 ,cr 2 , J _ 1 ) ,pi( / i i ))A From (1) by induction assumption. . . . A S ^ A 1 ( / > ( c r > i _ 1 , a | J _ 1 ) , p 1 ( / i i ) ) A (2) and (3) Definition of satisfaction. • Lemma B.2 . Let Mi, M2 and be models and M a p-composition of Mi and M2. Let i 6 {1,2}, and suppose that: (1) p is a surjection; (2) cr = p(al,a2), and t •< 5a?x(cr, pi(gi)) implies that t •< SatMi(a\gi). Then K< 1 Pi(9i)^>Pi{h)) iff KM, 4 h Appendix B. Detail of testing machines 226 Proof. Suppose \=M. <] gi=$>hi \ (1) Suppose t •< SatM(a, pi(gi)) (2) 3a 1, a2 3 cr = p(a1,a2) p is a surjection. (3) t < SatMi(cr\gi) By hypothesis (2). (4) t -< SatM, (o-'', hi) Since 1=^ . t\ gi=?>hi \ . 
(5) t< SatM(a,pi(hi)) ByLemmaB.l. (6) Therefore ^ <| p{ (g{) =§>p; (hi)) Suppose \=M. § pi(gi)=§>pi(hi) ) (1) Suppose t X SatMt(<Ti,gi) (2) t ^ Sar t^ (cr, pi (gi)) Lemma B. 1 (3) t -<SatM(cr,pi(hi)) SatMi{cri,gi). (4) t •< SatM {o~,hi) By hypothesis 2. • B.1.2 Composition of Circuit Models For circuit models, there are several natural definitions of composition that have useful proper-ties. There are, of course, other ways of composing circuits, but the one discussed here is simple and useful. Let Mi = ((Cm\ n),Ami,Yi) and M2 = ((Cm\ C ), Am\ Y 2 ) be two models. For circuit models, the next state function Y : Cn —> Cn is represented as a vector of next state function (Y[ l ] , . . . ,Y[n]) where each Y[j] : Cn -> C and Y(s) = (Y[l](s),... ,Y[n](s)). To compose two circuit models, we identify r pairs of nodes (each pair comprising one node of both circuits) and 'join' the pairs (i.e., informally, think of these pairs as being soldered to-gether, or physically identical). The state space of the composed circuit consists of mi + m2 — r components. The first mi — r components are the components of Mi that are not shared with Appendix B. Detail of testing machines 227 M2. The next m2— r components are the components of M2 that are not shared with Mi. The final r components are the r components shared by both Mi and M2. The formal definition of composition is a little intricate since it identifies state components by indices. The difficult part of the definition is identifying for each state component in the composed circuit the component or components in Mi and M2 that make it up.1 The idea is simple — the book-keeping is unfortunately off-putting. Let Ii — ( a i , . . . , ar), and Ji = (a[,... , a' _r) be lists of state components of Mi. If s £ Si, then the s[aj] are the components of the state space that are shared with M 2 and the s [a'j] axe the components of the state space that are not. We place the natural restriction that Ii and Jx are disjoint, and that their elements are arranged in strictly ascending order. I2 = (61 , . . . ,br) and J2 = (b[,... , b'm2_r) are the corresponding of lists for M2. Each (a,-, bj) pair is a pair of state components that must be 'joined'. Let convi(j) be the component of M to which the j-th component of Mi contributes. For-mally, define ( k when 3a'k € J i 9 a'k = j mi + m2 — r + k when Ba^ € h 9 a-k — j mi - r + k when 3b'k £ J2 9 b'k = j mi + m2 — r + k when 3bk £ h 3 bk = j Since the Ii and J, are distinct, we can define an inverse to convi. Define indexi(j) = k where convi(k) — j. Note that indexi is not defined on all of {1,... , mi + m2 — r], but that where it is defined, indexi(j) is the component of the state space of Mi which contributes to the j-th component of M. With this technical framework, composition can be defined easily. If si £ 1In practice, composition is a lot easier. Nodes are labelled by names drawn from a global space. We use the convention that if the same name appears in both circuits, then the nodes they label are actually the same physical node. Thus, the pairs that must be connected are implicit and do not have to be given. Appendix B. Detail of testing machines 228 Cmi and s2 e C™2, define Q(SUS2) by Q{SI,S2) = (si[indexi(l)],... , si[indexi{mi — r)], s2[index2{mi — r + 1)],... , s2[index2(mi + m2 — 2r)], si[mflfexi(mi + m 2 — 2r + 1)] U s2[index2(mi + m 2 — 2r + 1)],... , .si[/ndfexi(r)] U s2[ina'ex2(m1 + m2 — r)] ) Part of defining the composition is to define the mapping from simple predicates in M i and M2 to M. 
For T L n , this is easy since it is only for predicates of the form [j] that a non-trivial mapping has to be defined. Define convi(j)] when g = [j] for some j when g a constant predicates. Then define where Yi[wwfej:i(j)]((s[co/iVi(l)],... ,s[coravi(rai)])), j < mi — r Y[j](s) Y 2 [ i /Mte*2 ( j ) ] ( (s [c0 / iv 2 ( l ) ] , . • • , s[c0 / iv 2 (m 2 )])) , mi — r < j < mi + m 2 — 2r Yi[/nflfexi( i;)]((5[convi(l)],... , s[convi(mi)])) U Then X is \Y 2{index2{j)){(s[conv2{\)],... , s[corav2(m2)])), ? a ^-composition of A ^ i and M2, denoted g{M\,M2). m\ + m 2 — 2r < j Appendix B. Detail of testing machines 229 Lemma B.3. Q meets all the criteria given in Definition B. 1. Proof. (1) Q is monotonic. Q is defined component-wise. Each component is constructed from the identity and join functions. Since both of these are monotonic, monotonicity follows. (2) £(UTOl,Um2) = u m i + m 2 ~ r a n d £ ( Z m i , Z T O 2 ) = Zmi+m2-r Follows straight from the definition of g. (3) q = 51(61) implies thatq ^ /0i(0i)(y>i(si, U™2)) a. Suppose q = # i ( s i ) , let s = Q(SI, Um 2) . b. si[j] = s[convi(j)] From definition of g(si, U7™2). c. Let ^  G C i . Suppose gx — [j] for some j. d. P i ([j]) = [convi(j)]. Definition of 0 i . e- =9i(si) (b)and(d). f. gi(gi)(s) = q By assumption and (e). Otherwise gx must be one of the constant predicates {_L, f , t, T } . g- Qi{9i)=9i Definition of g. h- q = Qi{gi)(s) Q\{g\) is constant. (4) Similarly q = g2{s2) => q r< 02(02X0(1^, s2)). (5) ^ ( Y i ( s i ) , Y 2 ( 3 2 ) ) r< Y(^ ( s i , s 2 ) ) . Proved by showing for all j , Appendix B. Detail of testing machines 230 Q{Y1(s1),Y2(s2))\j]^Y\jMsus2)). Suppose j < mi — r. £(Yi( 5 i) ,Y 2 (s 2 ))[j] = Yi(si)[i/ufe*i(j)] = Y1[indexi(j)](s1) = Yi[indexi(j)] = ((s[convi(l)],... ,s[co/iVi(mi)])) = Y[j](.) Similarly if mi — r < j < mi + m 2 — 2r, ^ (Yi ( 5 i ) , Y 2( 5 2))[j] = Y2[/mfcc2(j)](s2) = Y[j](s) Suppose mi + m 2 — 2r < j £(Yi(6i ) ,Y 2 ( . 2 ) ) [ j ] = Yi(si)[tmfej:i(j)] U Y 2(s 2)[m^x 2(j)] = Yi[mJexi(j)](si) U Y2[iWex2(j)](s2) = Y[j](,) • Lemma B.3 is important because it means that Lemma B . l can be used. Furthermore, where composition of machines is done is such a way that the 'outputs' of one machine are connected to the 'inputs' of the other (so there is no 'feedback' — signals go from one machine to the other, but not vice versa.), Lemma B.2 applies too (to the circuit that provides the outputs). This definition is dependent on 71? I2, J\ and J2; for convenience, the following short-hand is used: Q(A, B)(ri,... , r*) refers to the composition of A and B where the r 1 ? . . . , com-ponents of A are shared with the first k components of B; formally Ii = ( r l 5 . . . ,rk), Jx = (1,. . . , rx - 1, r i + 1,... , r f c - 1, r f c + 1,... , k), I2 = (1,. . . , k) and J2 = (k + 1,... , k). Appendix B. Detail of testing machines 231 B.2 Mathematical Preliminaries for Testing Machines This section assumes the state space of the system is Cn for some n. There is nothing inherent in the method which limits the state space to this. However, from a notational point of view it is easier to explain the method with this simple case; furthermore, this is the important, practical case. The method generalises easily to an arbitrary complete lattice. Suppose that M i and M2 have both been derived from a common machine, M , using a sequence of compositions. (Assume that M = (C n ,Y*) , M i = ( C " + m i , Y i ) and M 2 = (Cn+m2, Y 2 ) .) 
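As an aside, the node-name convention mentioned in the footnote to Section B.1.2 suggests a simple way of picturing the composition ρ without the index book-keeping. The sketch below is hypothetical Python written purely for illustration (it is not part of the FL code in Appendix C, and the function names and four-value encoding are invented here): a state is a map from node names to values of C, and shared nodes are 'soldered together' by taking the least upper bound of their two contributions.

    # Lattice C: X (bottom, no information) below 0 and 1, T (top, overconstrained).
    BOT, ZERO, ONE, TOP = "X", "0", "1", "T"

    def join(u, v):
        """Least upper bound in C."""
        if u == v:
            return u
        if u == BOT:
            return v
        if v == BOT:
            return u
        return TOP          # joining 0 and 1 (or anything with T) gives T

    def compose_states(s1, s2):
        """Join two partial circuit states; nodes with the same name are identified."""
        s = dict(s1)
        for node, value in s2.items():
            s[node] = join(s.get(node, BOT), value)
        return s

    def compose_next(Y1, Y2):
        """Next-state function of the composed circuit: both component machines read
        the shared state and their outputs are joined node-wise (compare condition 5
        of Definition B.1).  Y1 and Y2 are assumed to ignore node names they do not
        know about."""
        def Y(s):
            return compose_states(Y1(s), Y2(s))
        return Y

The relative composition introduced next can be pictured in the same way: the common machine M and the two testers simply contribute to, and read from, shared named nodes.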
By the definition of composition the two next state functions Y x and Y 2 re-stricted to Cn are identical and the same as Y * . M{ (i = 1,2) consists of M and a tester T2. The relative composition of M i and M 2 with respect to M is the composition of M , 7\ and T 2. All of this could be described by composition, but it is convenient to define a specific notation. Formally, reLcompM(M1, M 2 ) = (Cn\ Y ) , where n' — n + mi + m 2 and if the current state is s, (ti,... , tn>) = Y(s) is defined by: ^ Y 2 ( ( s i , . . . , s n , s n + m i + i , . . . ,sn>))\j] when n +mi < j < n' B.3 Building Blocks Basic Block BBA Some predicates may depend (in some way) on the value of node 1 at a time t\ and node 2 at time t2. (Formally, a 'node' is a component of the state space; informally since we are reasoning about physical circuits nodes are wires in the circuit.) The purpose of BB A is to provide delay slots so that both values of interest are available at the same 'time'. BBA takes two parameters: a function g : C —> C which is used to combine the values, and n which indicates how many delay slots need to be constructed. Figure B . l depicts BBA(g, 4). when 1 < j < n + m\ Appendix B. Detail of testing machines 232 a Figure B . l : BBA(g,3): a Three Delay-Slot Combiner BBA(g, n) consists of two nodes which act as input nodes, n delay slots, and one node which is the output value of the machine. The two inputs nodes are typically part of the original circuit, which is why BBAs next state function does not affect the first two components. Formally, JL, when j = 1,2; BBA(g,n) = (Cn+3,Y) where if t = Y(s),then^ = < when j = 3 , . . . n + 2; 5 r ( s i , 5 n _ i ) , whenj = n + 3. The comp Jest operator adds the BBA circuit to an existing circuit. Given a machine M and a predicate g which depends on the value of ix at time 11 and i2 at time t2, the composite machine (M plus the testing circuit) is defined by: compJest(M,g, (W2)) (B.l) QiMiBBAfah -t2))(H,i2) ifh>t2 g(M,BBA(g,t2 -ti))(i2,ii) otherwise. The problem with the BBA testing machine is that if the two defining times are far apart, the testing circuit could be large due to the need to retain and propagate values, which has both space and computation costs associated. There is an alternative approach - build a memory into the circuit which keeps the needed information. Define BBA(g) = ( C 5 , Y ) . If the state of the machine is ( s i , . . . , s 5 ) , think of sx and s2 as the inputs, and s 5 as the output. s4 is used as the memory, and s3 indicates whether Appendix B. Detail of testing machines 233 the memory's value should be reset or maintained. Formally, when j 1,2,3 when j 4 and s 3 = 0 < (B.2) whenj 4 and s3 = 1 when j 5 Define compJest(M,g, (ti, «i) (<2 , i 2 ) ) g{M,BBA{g))(i1,i2) if * i > t2 (B.3) p ( M , BBA(g)){i2, i\) otherwise. Although in general the definition of equation B.3 will be more efficient, it cannot always be used. To see why consider this example. Let g and h be two predicates containing no temporal operators, where the result of g can be found at node ix and the value of h at node i2. Suppose we want to evaluate the predicate g A Next3/*. Implicit in this is that we are interested in ix at time 0 and i2 at time 3. For this we could use the second implementation of comp Jest and get the new machine compJest(M, (Xx, y.x A y), (0, i i ) , (3, i2)). Now suppose that we are interested in the predicate Exists [(0,10)] (g A Next3/i). 
This asks whether there is a time t between 0 and 10 such that g holds at time t and h holds at time t + 3. For this predicate, the second implementation will not work since it only remembers the value of g at one particular time, and we need to have the value of g at a sequence of times. The general rule in choosing between the implementations is that if the predicate for which the tester being constructed is within the scope of temporal operators such as E x i s t s and G l o b a l , then the first implementation must be used; otherwise the second, more efficient im-plementation can be used. Appendix B. Detail of testing machines 234 Basic Block BB B BBB is used when we need to combine the value of a predicate at a number of different times. For example, Globalg and Existsg depend on the value of g at a sequence of times. Define BBB(g,k) = ( C * + 2 , Y ) where if t = Y(s) then: JL when j = 1 S! when j = 2 f(sj,Sj-i) otherwise. Figure B.2 depicts this graphically. t3 = { Figure B.2: BBB(g,4) Basic Block BB c This is just a simple latch with a comparator. 5 5 c = (C 3, Y ) where Y((s i , s2, s3)) = (-L , 5 2 , 5 l = S2). Inverter Define / = ( C 2 , Y ) where Y ( ( s i , s 2 » = ( J - , - -* ! ) . B.4 Model Checking This section shows how to accomplish the following: Appendix B. Detail of testing machines 235 Given a machine M and an assertion of the form |= t\ A=Og construct a ma-chine M' and trajectory formulae A', C such that |= \ A=f>g } -£=^ > |=M» Every temporal formula, g, has an associated tuple (i,t, A, M')\ i indicates that the formula can be evaluated by examining the i-th component of the state space of the new machine; t indi-cates the time at which the component should be examined; A' gives a set of trajectory formulas which are used as auxiliary antecedents for the new machine; and M' is the new machine. The tuple is defined recursively on the structure of the temporal formula.2 1. g is ([i] — v). The tester which checks this compares the value of node i to v. The asso-ciated tuple is (n + 2,1, A', M'), where • A' = ([n + 2]=v) • M' = g(M,BBc){i). 2. g is Nextjg'. This does not require any extra circuitry — the tester that tests g' is already built in, and the only difference is that the result is checked at a different time. If the tuple associated with g is (i,t, A, M'), the tuple associated with Next jg is (i, t + j, A, M'). 3. gis~>g'. If the tester for g' is already built, an inverter will compute the answer for g. So, if the tuple associated with g is (i, t, A, M'), the tuple associated with -•/ is ( | M " | , t + 1, A, M"), where M" = g(M, 4. g is g'(gi,g2). Typically g' would be conjunction or disjunction. The tester takes as its input the results of gx and g2 and applies g' to them. Let the tuple associated with gx be (ii, tx, A i , M i ) and the tuple associated with g2 be (i2,t2, A2, M2). Assume that \M\ \ — n + mi and \M2\ = n + m2. 2Note that in this discussion M refers to the original machine, and n = \M\. Appendix B. Detail of testing machines 236 The tuple associated with g(gi,g2) is ( |M' | , max(£i, tf2) + 1, M A A2, M') where M ' = compJest(M",g,(t1,n+ml),(t2ln+ml+m2)) andM" = rel.compM(Mi, M 2 ) . 3 5. fi- is Global [(i, j)] This can be computed as Next8'(NextV A . . . A^ext^-'V))-Evaluating this directly is too inefficient (since lots of redundant work will be done). The following approach computes g' exactly once and then provides appropriate circuitry to combine this value produced at various times. If the tuple associated with g' is (i1? 
t, A 1 ; Mx), where \MX | = mi then the new tuple associated with g is ( |M' | , t + 1, A ' , M') where: A smaller, more efficient testing machine can be built provided that Global operator is not nested within another temporal operator. In this case, suppose the tuple associated with g is (iu tx, Ax, Mx), where | M i | = mi. The new tuple is ( |M' | ,*i + 1,A',M') where: |^si A s2 when j = 2. • A'= Ai A(Next*1[mi + 1] = 1). 6. g = Exists [(i,j)] g'. This is analogous to the Global case, and can be computed as Next'(Next V V . . . V (Next^'-4'^')). All the remarks pertaining to the Global oper-ator apply to the Exists operator - the difference is that instead of conjunction being 3Recall that M is the underlying machine. . M' = g(M, BBB((Xx, y.x A y), (j - i)))(n) • M' — g(M,M")(ii) • M" = (C 2 ,Y) • ift = Y(s) then: when j = 1. Appendix B. Detail of testing machines 237 applied, disjunction is applied. This shows that De Morgan's laws have a direct corre-spondence in the testing machines. 7. The bounded strong until, weak until, and periodic operators are all derived operators (see Definition 3.8). A straight-forward approach to model-checking these operators is model-check their more primitive definitions. For all three, smaller and more efficient machines are possible too. There are competing threads here—the more operators the easier it is for a verifier to express properties, but with the wider choice comes the cost of greater complexity for the verifier. Con-structing testing machines for the derived operators using the primitive definitions is not the most efficient approach: if the operators are going to be used, optimised testing machines should be constructed; but, if they are not going to be used the verification system will be more com-plicated that it needs to be. There are two further types of optimisations which could be done. Testing machines are not canonical—there are different ways with different complexities of evaluation. The rewriting of formulas could yield improvement. The other issue has been discussed already: in some cases the testing machine needed for a formula depends on whether that formula is embedded within other temporal operators. If a formula stands by itself, then its satisfaction can checked by ex-amining one component of the testing circuit at one instant in time. However, if the formula is embedded within some of the temporal operators, then we need to know the satisfaction of the formula at a number of instants in time. Appendix C Program listing C l FL Code for Simple Example 1 let c_size = bit_width; let bwidth = ' c_size; let i = IVar " i " ; let j = IVar " j " ; let k = IVar "k"; let 1 = IVar "1"; let GND = "GND"; let a = Var "a"; let A = "a"; let B = "b"; let C = "c"; let D = "d"; let E = "e"; let FNode = "f"; let Globallnput = ((A ISINT i)_&_(B ISINT j)_&_(C ISINT k)) FROM 0 TO 100; let l i s t l = [ ( " i " , 1 upto 8), ("j", 9 upto 16), ("k", 17 upto 24)]; let varmapl = BVARS l i s t l ; let varmap2 = BVARS (("a", [25]):listl); let Al = Globallnput; let Cl = D ISBOOL (i > j) FROM 10 TO 100; let TI = VOSS varmapl (Al ==» Cl) ; let A2 = Globallnput _&_ ((D ISBOOL a) FROM 10 TO 100); let C2 = Globallnput _&_ ((E ISINT i WHEN a) _&_ (E ISINT j WHEN (Not a)) FROM 20 TO 100); let T2 = VOSS varmap2 (A2 ==» C2); let A3 = E ISINT 1 C ISINT k FROM 20 TO 100; let C3 = FNode ISINT (1 '+ k) FROM 50 TO 100; let T3 = VOSS varmapl (A3 ==» C3) ; let proof = let GI = SPTRANS [] TI T2 in let G2 = SPTRANS [] GI T3 in G2 ; 238 Appendix C. 
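The C.1 script above verifies three simple STE results (T1, T2 and T3) and then chains them with SPTRANS to obtain the final result. As a rough illustration of what such chaining establishes, the following sketch is hypothetical Python (it is not the FL interface, and it is deliberately simplified: plain Boolean node values, no symbolic variables and no quaternary lattice). It treats an assertion as a pair of requirement sets and combines two assertions when the first discharges the antecedent of the second, in the spirit of the consequence and transitivity rules of Chapter 5.

    # A requirement is a (node, time, value) triple; an assertion <A ==>> C> is a
    # pair (antecedent, consequent) of frozensets of requirements.

    def chain(assertion1, assertion2):
        """Combine <A1 ==>> C1> and <A2 ==>> C2> into <A1 ==>> C1 u C2>, provided
        everything A2 demands is already given by A1 or guaranteed by C1."""
        a1, c1 = assertion1
        a2, c2 = assertion2
        if not a2 <= (a1 | c1):
            raise ValueError("second antecedent is not discharged by the first assertion")
        return (a1, c1 | c2)

    # Toy example in the shape of C.1: the first assertion establishes node "d" at
    # time 10, which is exactly what the second assertion assumes.
    t1 = (frozenset({("a", 0, 1), ("b", 0, 0)}), frozenset({("d", 10, 1)}))
    t2 = (frozenset({("d", 10, 1)}),             frozenset({("e", 20, 1)}))
    print(chain(t1, t2))    # (antecedent of t1, union of the two consequents)

In the thesis itself the corresponding side conditions are phrased in terms of the ordering on defining sequence sets (Theorems 5.18 and 5.19), not simple set inclusion; the sketch only conveys the shape of the argument.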
Program listing 239 C.2 FL Code for Hidden Weighted Bit let N = bit_width; let x = IVar "x"; let InputNode = let BufferNodel= let Chooser = let j = IVar " j " ; " InputNode"; "bufferl"; "chooser"; "error"; let CountNode = "CountNode let BufferNode2= "buffer2"; let Result = "result"; let Error let varmapl = BVARS([("x",1 upto N)]); let BufferTheorem = VOSS varmapl ((InputNode ISINT x FROM 0 TO 1000) ==» ( (BufferNodel IS INT x) _&_ (BufferNode2 ISINT x) FROM 5 TO 1000)); letrec add_bits x num = x = N => BIT2 ('N) num | (BIT2 C x ) num) '+ (add_bits (x+1) num); letrec count_of num = add_bits 1 num; let CounterGoal = (BufferNodel ISINT x FROM 0 TO 990) ==» (CountNode ISINT (count_of x) FROM 400 TO 990); let CounterTheorem = VOSS varmapl CounterGoal; let stagel = CONJUNCT BufferTheorem (AUTOTIME [] BufferTheorem CounterTheorem); let seg x = BWID ('Nbit) x; let kthBit k var = (BIT2 ('k) var) '= (BIT2 ( ' 1) C D ) ; letrec case_analysis var j = letrec case k = " k=l => Result ISBOOL (kthBit k var) WHEN (j '= (seg C k ) ) ) | (Result ISBOOL (kthBit k var) WHEN (j '= (seg ('k)))) _& (case (k-1) ) in case N; infix 3 ISBOOL_VEC; letrec ISBOOL_VEC [x] [y] = x ISBOOL y A ISBOOL_VEC (x:rx) (y:ry) = (x ISBOOL y) _&_ (rx ISBOOL_VEC ry); let ChooserGoal= ((CountNode ISINT j FROM 0 TO 400) _&_ (BufferNode2 ISINT x FROM 0 TO 400)) ==>> ( ( (case_analysis x j) _&. Appendix C. Program listing 240 (Error ISBOOL (seg('O) '= j ) ) ) FROM 300 TO 400); l e t ChooserTheorem = VOSS varmap2 ChooserGoal; l e t Proof = ALIGNSUB [] stagel ChooserTheorem; C.3 F L Code for Carry-Save Adder l e t A = Nnode "A"; l e t B = Nnode "B"; l e t C = Nnode "C"; l e t D = Nnode "D"; l e t E = Nnode "E"; l e t a = Nvar "a"; l e t b = Nvar "b"; l e t c = Nvar "c"; l e t bdd_order = o r d e r _ i n t _ l [b, c, a]; l e t l e t l e t l e t l e t range sum_lhs sum_rhs Antl Conl (bit_width-l)--0; (D+E) « r a n g e » ; (a+b+c) «range>> ; ((A == a)??) and ((B == b) ??) and ((C NextG 3 ( (sum_lhs == sum_rhs) ??); c)??) ; l e t T l = prove_voss bdd_order adder Antl Conl; C.4 FL Code for Multiplier // miscellaneous l e t high_bit = entry_width l e t max_time = 800; l e t out_time = 3; - 1; // 0..entry_width-l II- Node, va r i a b l e declarations l e t l e t l e t l e t l e t l e t l e t l e t l e t A = Nnode AINP; B = Nnode BINP; RS i = Nnode (R_S i ) ; RC i = Nnode (R_C i ) « (high_bit-l)--0» ; TopBit i = Nnode (R_C i) <<high_bit»; a = (Nvar "a") « (entry_width-l)--0»; b = (Nvar "b")« (entry_width-l)--0»; c = Nvar "c"; d = (Nvar "d")« (high_bit-l)--0»; Appendix C. Program listing 241 l e t p a r t i a l {n : : i n t } = c <<(n+high_b i t ) - -0>>; / / BDD v a r i a b l e o r d e r i n g f o r e a c h s t a g e o f m u l t i p l i e r l e t m_bdd_order { n : : i n t } = n = 0 => o r d e r _ i n t _ l [b, a] | n = e n t r y _ w i d t h => o r d e r _ i n t _ l [ p a r t i a l n , d] | o r d e r _ i n t _ l [b<<n>>, a , p a r t i a l n , d ] ; l e t z e r o _ c o n d i = ( ( T o p B i t i ) = = ( ' 0 ) ) ? ? ; l e t i n t e r v a l n = n <= e n t r y _ w i d t h => [ ( ' ( n * o u t _ t i m e ) , 'max_t ime)] | [ ( ' ( n * o u t _ t i m e + 2 * e n t r y _ w i d t h ) , ' m a x _ t i m e ) ] ; l e t I n p u t A n t s = A l w a y s ( i n t e r v a l 0) (( (A == a) ?? ) and ( (B == b) ? ? ) ) ; l e t O u t p u t C o n s = l e t l h s = RS e n t r y _ w i d t h i n l e t r h s = (a * b) « ( 2 * e n t r y _ w i d t h - l ) - - 0 » i n A l w a y s ( i n t e r v a l ( e n t r y _ w i d t h + l ) ) ( ( l h s = = r h s ) ? ? 
) ; / / A n t e c e d e n t f o r row n o f t h e m u l t i p l i e r l e t MAnt { n : : i n t } n = 0 => A l w a y s ( i n t e r v a l 0) ( ( (A == a ) ? ? ) and ( ( B « n » == b « n » ) ? ? ) ) | A l w a y s ( i n t e r v a l n) (( (A == a ) ? ? ) and ( ( B « n » == b « n » ) ? ? ) and ( (RS (n-1) == ( p a r t i a l (n-1)))??) and ( (RC (n-1) == d ) ? ? ) and ( z e r o _ c o n d (n-1)) ); / / Consequent o f row n o f t h e m u l t i p l i e r l e t r e s _ o f _ r o w n = l e t power n = Npow ('2) ('n) i n l e t l h s = (RS n) + (power ( n + l ) ) * ( R C n) i n l e t r h s = n=0 => a * b < < 0 » | ( ( p a r t i a l (n-1)) +(power n) * d) + (power n) *a * (b « n » ) i n ( ( l h s == r h s ) ? ? ) ; Appendix C. Program listing 242 C o n _ o f _ s t a g e n = l e t p o w e r n = Npow ('2) ('n) i n l e t l h s = (RS n) + ( p o w e r ( n + l ) ) * ( R C n) i n l e t r h s = a * b « n - - 0 » i n A l w a y s ( i n t e r v a l (n+1)) ( ( l h s == r h s ) ? ? a n d ( z e r o _ c o n d n ) ) ; MCon { n : : i n t } = A l w a y s ( i n t e r v a l (n+1)) ( ( r e s _ o f _ r o w n ) a n d ( z e r o _ c o n d n ) ) ; Mthm n = l e t b d d _ o r d e r = ( m _ b d d _ o r d e r n) i n l e t a n t = MAnt n i n l e t c o n = MCon n i n p r o v e _ v o s s b d d _ o r d e r m u l t i p l i e r a n t c o n ; p r e a m b l e _ t h m = l e t s t a r t = Mthm 0 i n P r e c o n d i t i o n I n p u t A n t s s t a r t ; d o _ p r o o f _ m a i n _ s t a g e n m p r e v i o u s _ s t e p = l e t c u r r = Mthm n i n l e t c u r r ' = G e n T r a n s T h m p r e v i o u s _ s t e p c u r r i n l e t c u r r e n t = P o s t c o n d i t i o n ( C o n _ o f _ s t a g e n) c u r r ' i n n = m => c u r r e n t | d o _ p r o o f _ m a i n _ s t a g e (n+1) m c u r r e n t ; l e t m a i n _ s t a g e = d o _ p r o o f _ m a i n _ s t a g e 1 h i g h _ b i t p r e a m b l e _ t h m ; l e t a d d e r _ p r o o f = l e t p o s t _ a n t _ c o n d = (( (RS h i g h _ b i t ) == ( p a r t i a l h i g h _ b i t ) ) ? ? ) a n d (( (RC h i g h _ b i t ) == d ) ? ? ) a n d (( ( T o p B i t h i g h _ b i t ) == ('0))??) i n l e t p o s t _ a n t = A l w a y s ( i n t e r v a l e n t r y _ w i d t h ) p o s t _ a n t _ c o n d i n l e t p o w e r = Npow ('2) ( ' e n t r y _ w i d t h ) i n l e t r h s = ( ( p a r t i a l h i g h _ b i t ) + p o w e r * d ) < < ( b i t _ w i d t h - l ) - - 0 » i n l e t p o s t _ c o n _ c o n d = ( ( R S e n t r y _ w i d t h ) -= r h s ) ? ? i n l e t p o s t _ c o n = A l w a y s ( i n t e r v a l ( e n t r y _ w i d t h + l ) ) p o s t _ c o n _ c o n d i n p r o v e _ v o s s ( m _ b d d _ o r d e r e n t r y _ w i d t h ) m u l t i p l i e r p o s t _ a n t p o s t _ c o n ; l e t p r o o f = G e n T r a n s T h m m a i n _ s t a g e a d d e r _ p r o o f ; Appendix C. 
Program listing 243 C.5 F L Code for Matrix Multiplier Proof // miscellaneous let high_bit = entry_width - 1 ; // 0..entry_width-l let max_time = entry_width < 10 => 100 | 10*entry_width; let clock_time = max_time; // half a clock cycle let out_time = 3; / / let prove_result = prove_voss_fsm; let prove_result_static = prove_voss_static; // Node, variable declarations // global let Clock = Bnode CLK; // individual cells let A u v = Nnode (AINP u v); let B u v = Nnode (BINP u v); let IN_C u v = Nnode (C_Inp u v); let OUT_C u v = Nnode (C_Out u v); let M = make_fsm sys_array; let RS u v i = Nnode (R_S u v i ) ; let RC u v i = Nnode (R_C u v i)<<(high_bit-l)--0>>; let TopBit u v i = Nnode (R_C u v i) «high_bit>>; let a = (Nvar "a") « (entry_width-l)—0»; let b = (Nvar "b" ) « (entry_width-l)—0»; let c = Nvar "c"; let d = (Nvar "d")«(high_bit-l)—0»; let e = Nvar "e"; let partial {n :: int} = e «(n+high_bit)--0>>; // BDD variable ordering for each stage of multiplier let m_bdd_order {n::int} = n = 0 => order_int_l [b, a] | n=entry_width => order_int_l [partial n, d] | order_int_l [b<<n>>, a, partial n, d]; // timings let Duringlnterval n f = During (n*out_time, max_time) f; letrec ClockAnt n = let range = 0 upto (n-1) in let false_range = map (\x.(('(2*x*clock_time), '(2*x*clock_time+clock_time-l)))) range in let true_range = map (\x.('(2*x*clock_time+clock_time), Appendix C. Program listing 244 '(2*(x+1)*clock_time-l))) (butlast range) in (Always false_range ((Clock == Bfalse)??)) and (Always true_range ((Clock == Btrue )??)); let InputAnts u v = Duringlnterval 0 ((A u v '= a) and (B u v '= b)); let zero_cond u v i = TopBit u v i '= ('0); // Antecedent for row n of the multiplier let MAnt u v {n::int} = n = 0 => Duringlnterval 0 ( ( A u v '= a ) and ( (B u v)<<n>> '= b<<n>>)) | Duringlnterval n (( A u v '= a) and ( (B u v)<<n>> '= b<<n>>) and ( RS u v (n-1) '= (partial (n-1))) and ( RC u v (n-1) '= d) and ( zero_cond u v (n-1)); // Consequent of row n of the multiplier let res_of_row u v n = let power n = Npow ('2) ('n) in let lhs = (RS u v n) + (power (n+l))*(RC u v n) in let rhs = n=0 => a * b <<0>> | ((partial (n-1)) +(power n) * d) + (power n) *a* (b « n » ) i n lhs '= rhs; let Con_of_stage u v n = let power n = Npow ('2) ('n) in let lhs = (RS u v n) + (power (n+l))*(RC u v n) in let rhs = a * b«n--0>> in Duringlnterval (n+1) ((lhs '= rhs) and (zero_cond u v n)); let MCon u v {n::int} = Duringlnterval (n+1) ((res_of_row u v n ) and (zero_cond u v n)); let Mthm u v n = let bdd_order = (m_bdd_order n) in let ant = MAnt u v n in let con = MCon u v n in prove_result bdd_order M ant con; let preamble_thm u v = print (nl^"Doing preamble"~nl) seq Appendix C. 
Program listing 245 l e t s t a r t = Mthm u v 0 i n ( s t a r t c a t c h s t a r t ) s e q P r e c o n d i t i o n ( I n p u t A n t s u v) s t a r t ; l e t r e c d o _ p r o o f _ m a i n _ s t a g e u v n m p r e v i o u s _ s t e p = l e t c u r r = Mthm u v n i n l e t c u r r ' = G e n T r a n s T h m p r e v i o u s _ s t e p c u r r i n l e t c u r r e n t = P o s t c o n d i t i o n ( C o n _ o f _ s t a g e u v n) c u r r ' i n ( p r i n t ( n l " " D o i n g M [ " " ( i n t 2 s t r u ) " " , n " ( i n t 2 s t r v ) " " ] ( " ~ ( i n t 2 s t r n ) ~ " ) " ~ n l ~ n l ) s e q ( c u r r e n t c a t c h c u r r e n t ) ) s e q ( n = m => c u r r e n t | d o _ p r o o f _ m a i n _ s t a g e u v (n+1) m c u r r e n t ) ; l e t m a i n _ s t a g e u v = d o _ p r o o f _ m a i n _ s t a g e u v 1 h i g h _ b i t ( p r e a m b l e _ t h m u v ) ; l e t a d d e r s _ p r o o f u v = l e t p o s t _ a n t _ c o n d = ( (RS u v h i g h _ b i t ) '= ( p a r t i a l h i g h _ b i t ) ) a n d ( (RC u v h i g h _ b i t ) '= d) a n d ( ( T o p B i t u v h i g h _ b i t ) '= CO)) i n l e t p o s t _ a n t = D u r i n g l n t e r v a l e n t r y _ w i d t h p o s t _ a n t _ c o n d i n l e t p o w e r = Npow ('2) ( ' e n t r y _ w i d t h ) i n l e t r h s = ( ( p a r t i a l h i g h _ b i t ) + p o w e r * d ) « ( b i t _ w i d t h - l ) - - 0 > > i n l e t p o s t _ c o n _ c o n d = (RS u v e n t r y _ w i d t h ) '= r h s i n l e t p o s t _ c o n = D u r i n g ( e n t r y _ w i d t h * ( o u t _ t i m e + 2 ) , c l o c k _ t i m e ) p o s t _ c o n _ c o n d i n l e t b d d _ o r d e r = m _ b d d _ o r d e r e n t r y _ w i d t h i n ( p r i n t " D o i n g a d d e r " s e q ( p o s t _ c o n c a t c h p o s t _ c o n ) ) s e q p r o v e _ r e s u l t b d d _ o r d e r M p o s t _ a n t p o s t _ c o n ; l e t c e l l _ o u t _ t i m e = [ ( ' ( 2 * c l o c k _ t i m e ) , ' ( 3 * c l o c k _ t i m e ) ) ] ; l e t r e g i s t e r _ p r o o f u v = l e t c _ a n t = ( ( ( R S u v e n t r y _ w i d t h ) '= ( p a r t i a l e n t r y _ w i d t h ) ) a n d ( ( I N _ C u v) '= c ) ) i n l e t c _ a n t ' = ( C l o c k A n t 2) a n d ( D u r i n g ( e n t r y _ w i d t h * ( o u t _ t i m e + 2 ) , c l o c k _ t i m e ) c _ a n t ) i n l e t c _ r h s = ( p a r t i a l e n t r y _ w i d t h ) + c i n l e t c _ c o n = (OUT_C u v) '= c _ r h s i n l e t c _ r e g = p r o v e _ r e s u l t ( o r d e r _ i n t _ l [ c , p a r t i a l e n t r y _ w i d t h ] ) M Appendix C. 
Program listing 246 c _ a n t ' ( A l w a y s c e l l _ o u t _ t i m e c _ c o n ) i n ( ( p r i n t " D o i n g r e g i s t e r " ) s e q c _ c o n c a t c h c _ c o n ) s e q c _ r e g ; // o n e _ p r o o f u v : p r o v e s t h a t t h e ( u , v ) - t h c e l l w o r k s // c o r r e c t l y l e t o n e _ p r o o f u v = // P r o v e t h a t m u l t i p l i e r p a r t s w o r k ( u n c l o c k e d ) l e t m _ s t a g e = m a i n _ s t a g e u v i n ( m _ s t a g e c a t c h m _ s t a g e ) s e q // t a k e i n t o a c c o u n t c l o c k i n g a n d t h e p a r t i a l sum i n p u t l e t n e w _ a n t s = I n p u t A n t s u v a n d ( C l o c k A n t 2) a n d ( D u r i n g l n t e r v a l 0 (IN_C u v '= c ) ) i n l e t new_thm = P r e c o n d i t i o n n e w _ a n t s m _ s t a g e i n // show t h e a d d e r p a r t o f t h e c e o l w o r k s l e t a _ p r o o f = a d d e r s _ p r o o f u v i n ( a _ p r o o f c a t c h a _ p r o o f ) s e q // A d d c l o c k i n g t o t h e a d d e r p r o o f l e t c o m p _ p r o o f = G e n T r a n s T h m new_thm a _ p r o o f i n // Show t h a t t h e r e g i s t e r s w o r k l e t r _ p r o o f = r e g i s t e r _ p r o o f u v i n ( ( r p r o o f c a t c h r _ p r o o f ) s e q // s t i c k t h e m a l l t o g e t h e r l e t r e s u l t = ( n o r m a l i s e C o n ( G e n T r a n s T h m c o m p _ p r o o f r _ p r o o f ) ) i n r e s u l t ) ; l e t r e c m a k e _ c e l l _ r o w _ l i s t p _ p r o c u v = v = a r r a y _ d e p t h => [] | l e t r e s = p _ p r o c u v i n p r i n t ( s n d ( t i m e r e s ) ) s e q ( r e s s e q ( r e s : ( m a k e _ c e l l _ r o w _ l i s t p _ p r o c u (v+1)))); l e t r e c m a k e _ p r o o f _ l i s t p _ p r o c u = u = a r r a y _ w i d t h => [] | ( m a k e _ c e l l _ r o w _ l i s t p _ p r o c u 0): ( m a k e _ p r o o f _ l i s t p _ p r o c (u+1)); l e t c e l l _ p r o o f _ l i s t = m a k e _ p r o o f _ l i s t o n e _ p r o o f 0; // Show t h a t t h e c e l l s a l s o p r o g a t e t h e i r A a n d B i n p u t s Appendix C. 
Program listing 247 l e t o n e _ p r o o f _ p r o p a g a t e A u v = l e t a n t s = ( D u r i n g l n t e r v a l 0 (A u v '= a ) ) a n d ( C l o c k A n t 2) i n l e t a b _ c o n = A u (v+1) '= a i n l e t a b _ r e g = p r o v e _ r e s u l t ( m _ b d d _ o r d e r 0) M a n t s ( A l w a y s c e l l _ o u t _ t i m e a b _ c o n ) i n a b _ r e g ; l e t o n e _ p r o o f _ p r o p a g a t e B u v = l e t a n t s = ( D u r i n g l n t e r v a l 0 ( ( B u v ) '= b ) ) a n d ( C l o c k A n t 2) i n l e t a b _ c o n = (B (u+1) v) '= b i n l e t a b _ r e g = p r o v e _ r e s u l t ( m _ b d d _ o r d e r 0) M a n t s ( A l w a y s c e l l _ o u t _ t i m e a b _ c o n ) i n a b _ r e g ; l e t A p r o p a g a t e _ p r o o f _ l i s t = m a k e _ p r o o f _ l i s t o n e _ p r o o f _ p r o p a g a t e A 0; l e t B p r o p a g a t e _ p r o o f _ l i s t = m a k e _ p r o o f _ l i s t o n e _ p r o o f _ p r o p a g a t e B 0; l e t c e l l _ p r o o f u v = e l (v+1) ( e l (u+1) c e l l _ p r o o f _ l i s t ) ; l e t A p r o p a g a t e _ p r o o f u v = e l (v+1) ( e l (u+1) A p r o p a g a t e _ p r o o f _ l i s t ) ; l e t B p r o p a g a t e _ p r o o f u v = e l (v+1) ( e l (u+1) B p r o p a g a t e _ p r o o f _ l i s t ) ; l e t em_thm = ( [ ] , [ ] , [ ] ) ; / / // T h e * _ p r o o f _ l i s t c o n t a i n s a l l t h e p r o o f s t h a t t h e i n d i v i d u a l // c o m p o n e n t s o f t h e h a r d w a r e w o r k c o r r e c t l y . T h e r e s t o f t h e // p r o o f shows t h a t when c o n n e c t e d t o g e t h e r t h e y p r o d u c e // t h e r i g h t m a t r i x m u l t i p l i c a t i o n r e s u l t l e t r e c I n s e r t A c t i v e T h e o r e m a d d f n ( { u : : i n t } , { v : : i n t } , { n e w _ t h m : : t h e o r e m } ) [] = [ ( u , [ ( v , a d d f n new_thm e m _ t h m ) ] ) ] A I n s e r t A c t i v e T h e o r e m a d d f n (u,v,new_thm) ( ( a u , a l i s t ) : b r e s t ) = l e t r e c P u t A c t i v e T h e o r e m l n ( { v : : i n t } , { n e w _ t h m : : t h e o r e m } ) [] = [ ( v , a d d f n new_thm em_thm)] A P u t A c t i v e T h e o r e m l n ( v , new_thm) ( ( a v , a v l i s t ) r v r e s t ) = v = a v => ( a v , a d d f n new_thm a v l i s t ) : v r e s t | ( a v , a v l i s t ) : ( P u t A c t i v e T h e o r e m l n ( v , new_thm) v r e s t ) i n u = a u => ( a u , P u t A c t i v e T h e o r e m l n ( v , new_thm) a l i s t ) : b r e s t | ( a u , a l i s t ) : ( I n s e r t A c t i v e T h e o r e m a d d f n ( u , v , n e w _ t h m ) b r e s t ) ; Appendix C. Program listing 248 l e t r e c R e t r i e v e T h e o r e m { u : : i n t } { v : : i n t } •[] = ( [ ] , [ ] , [ ] ) /\ R e t r i e v e T h e o r e m u v ( ( a u , a l i s t ) r b r e s t ) = l e t r e c G e t A c t i v e T h e o r e m v [] = ( [ ] , [ ] , [ ] ) A G e t A c t i v e T h e o r e m v ( ( a v , a v l i s t ) : v r e s t ) = v = a v => a v l i s t | G e t A c t i v e T h e o r e m v v r e s t i n u = a u => G e t A c t i v e T h e o r e m v a l i s t | R e t r i e v e T h e o r e m u v b r e s t ; l e t I n s e r t A c t i v e L i s t a d d _ f n t h m _ l i s t c u r r e n t = i t l i s t ( \ x A y . 
I n s e r t A c t i v e T h e o r e m a d d _ f n x y ) t h m _ l i s t c u r r e n t ; // V E R I F I C A T I O N CONDITION /// I n p u t s p e c i f i c a t i o n s l e t s e t l n p u t I n p N o d e {u :: i n t } {v :: i n t } { i : : i n t } { n _ v a l :: N} = l e t i n p u t = D u r i n g ( i * 2 * c l o c k _ t i m e , ( i + 1 ) * 2 * c l o c k _ t i m e - l ) ( I n p N o d e u v '= n _ v a l ) i n ( u , v . I d e n t i t y i n p u t ) ; l e t a l l = Nvar " a l l " l e t a l 2 = Nvar " a l 2 " l e t a l 3 = Nvar " a l 3 " l e t a l 4 = Nvar " a l 4 " l e t a21 = Nvar " a21" l e t a22 = Nvar "a22" l e t a23 = Nvar "a23" l e t a24 = Nvar "a24" l e t a31 = Nvar "a31" l e t a32 = Nvar "a32" l e t a33 = Nvar "a33" l e t a34 = Nvar "a34" l e t a41 = Nvar "a41" l e t a42 = Nvar "a42" l e t a43 = Nvar "a43" l e t a44 = Nvar "a44" l e t b l l = Nvar " b l l " l e t b l 2 = Nvar " b l 2 " l e t b l 3 = Nvar " b l 3 " l e t b l 4 = Nvar " b l 4 " l e t b21 = Nvar " b21" l e t b22 = Nvar "b22" l e t b23 = Nvar "b23" l e t b24 = Nvar "b24" l e t b31 = Nvar "b31" l e t b32 = Nvar "b32" l e t b33 = Nvar "b33" l e t b34 = Nvar "b34" l e t b41 = Nvar "b41" l e t b42 = Nvar "b42" l e t b43 = Nvar "b43" l e t b44 = Nvar "b44"; l e t t h e _ i n p u t s = // aO a l a2 a3 bO b l b2 b3 [ ([ '0, '0, '0, '0] , [ '0, '0, '0, '0] ) , //o ([ '0, '0, '0, '0] , [ '0, '0, '0, '0] ) , / / l ([ '0, '0, '0, '0] , [ '0, '0, '0, '0] ) , 7/2 ([ '0, '0, '0, '0] , [ '0, '0, '0, '0] ) , //3 ([ '0, a l l , '0, '0] , [ '0, b l l , '0, '0] ) , //4 ([ '0, '0, a21. '0] , [ '0, '0, b l 2 , '0] ) , //5 ( [ a l 2 , '0, '0, a31] , [b21, '0, '0, b l 3 ] ) , //6 ([ '0, a22, '0, '0] , [ '0, b22, '0, '0] ) , in (t '0, '0, a32, '0] , [ '0, '0, b23, '0] ) , //8 ([a23, '0, '0, a42] , [b32, '0, '0, b 2 4 ] ) , //9 ([ '0, a33, '0, '0] , [ '0, b33, '0, '0] ) , //10 ([ '0, '0, a43, '0] , [ '0, '0, b34, '0] ) , / / l l Appendix C. Program listing 249 ([a34, '0, '0, ([ '0, a44, '0, ([ '0, '0, '0, ([ '0, '0, '0, '0], [b43, '0, '0] , [ '0, b44, ' 0 ] , [ ' 0, , ' 0, '0], [ '0, '0, '0,- '0] ) , //12 '0, '0] ) , //13 •o,.. '0] ) , //14 '0, '0] ) ] •//15; // Output specifications let timeForOutputs = // 1 2 3 4 / / [ [ 6, 7, 8, 9] , // 1 [ 7, 9, 10, 11], // 2 [ 8, 10, 12, 13], // 3 [ 9, 11, 13, 15] // 4 ] ; . let outputFor row col = el col (el row timeForOutputs); let InputForCells = [ ] ; let addfirst x (a,b,c) = (x:a,b,c); let addsecond x (a,b,c) = (a,x:b,c); let addthird x (a,b,c) = (a,b,x:c); let InputAtStage n the_lists = val (avals, bvals) = el (n+1) the_inputs in let l e f t _ l i s t = map (\x.setlnput A {x::int} 0 n (el (x+1) avals)) (0 upto (array_depth-l)) in let r i g h t _ l i s t = map (\x.setlnput B 0 x n (el (x+1) bvals)) (0 upto (array_width-l)) in let down_list = (map (\x.setlnput IN_C (array_depth-l) {x::int} n ('0)) (0 upto (array_width-l)))@ (map (\x.setlnput IN_C x (array_width-l) {n::int} ('0)) (0 upto (array_depth-2))) in let resl = InsertActiveList addfirst l e f t _ l i s t the_lists in let res2 = InsertActiveList addsecond r i g h t _ l i s t resl in InsertActiveList addthird down_list res2; let start_step = InputAtStage 0 [] ; let this_step = start_step; let num_step = 0; let PropagateVal addfn row col okl {ok2::bool} res o l d _ l i s t = okl AND ok2 => InsertActiveTheorem addfn (row, col, res) o l d _ l i s t | o l d _ l i s t ; let PropagateRes row col a l l res res_l = Appendix C. 
Program listing 250 let c_index = "C"~(num2str(array_width-col-l+row)) in a l l AND (row*col = 0) => (c_index, res, (row, col)): res_l | res_l; letrec ProcessStageRow n {row::int} [] so_far = so_far A ProcessStageRow n row ((col, colthms):rest) (prop_list, res_l) = let make_step (a, b, c) = let ok a n = length a > n in let all_thms = (Identity(ClockAnt ((n+1)*2))):(a@b@c) in let ab_inps = (a@b) in let a l l = ok all_thms 3 in let curr_gen = a l l => Conjunct [cell_proof row col, Apropagate_proof row col, Bpropagate_proof row col] | length ab_inps = 2 => Conjunct [Apropagate_proof row col,Bpropagate_proof row col] | ok a 0 => Apropagate_proof row col | Bpropagate_proof row col in let curr_thm = Transform (TimeShift (2*n*clock_time)) curr_gen in let inps = Conjunct all_thms in let res = normaliseCon (GenTransThm inps curr_thm) in let new_l = PropagateVal addfirst row (col+1) (col<(array_width-l)) (ok a 0) res prop_list in let new_r = PropagateVal addsecond (row+1) col (row<(array_depth-l)) (ok b 0) res new_l in let new_d = PropagateVal addthird (row-1) (col-1) ((row*col) > 0) a l l res new_r in let new_rl= PropagateRes row col a l l res res_l in empty ab_inps => (prop_list, res_l) | (new_d, new_rl) in ProcessStageRow n row rest (make_step colthms); letrec ProcessStageProof n [] so_far = so_far /\ ProcessStageProof n ((row,rowthms):rest) so_far = let current = ProcessStageRow n row rowthms so_far in (print ("Doing row "~(int2str row)"nl)) seq (current catch current) seq ProcessStageProof n rest current; Appendix C. Program listing 251 let do_step n start_step = letrec perform m curr_step = let current = ProcessStageProof m (InputAtStage m curr_step) ([] , []) in (print ("Performing step ""(int2str m)~nl~nl)) seq (current catch current)'seq m = n => [snd current] | (snd current):(perform (m+1) (fst current)) in perform 0 start_step; • let output_list = do_step 15 []; // present results let ShowRes t r e s _ l i s t = el (t+1) r e s _ l i s t ; let Show t node = let res = ShowRes t output_list in find (\(x,y,a,b) . 
(x=node) AND ( (a* {b: : int}') = 0)) res; let OutputOfArray row col = let strip (Always r f) = f in val (a, th, b, c) = Show (outputFor row col) ("C"'(num2str(3+row-col))) in strip (con_of th) ; letrec PrintRowOutput row col = (col = array_width+l) => nl'nl | ("(""(int2str row)~" ,""(int2str col)~") :"~ (el2str (OutputOfArray row col))"nl) "(PrintRowOutput row (col+1)); letrec PrintOutput row = row = array_depth + 1 => nl I (PrintRowOutput row 1) " (PrintOutput (row+1)); Index =g>,71 = 0 , 71 C ,42 ET>,73 n,76 ^,47 abstract data type, 120 abstraction, 6, 38 antecedent, 70 assertions, 71, 83 £ , 4 8 B8ZS encoder, 143 BDD, 19 variable ordering, 20, 126 bilattice, 47 Binary decision diagrams, see BDD bottom element model, 42 truth domain, 48 branching time, 45 C, see circuit model carry-save adder, see CSA characteristic representation, 118 circuit model, 63 correctness, 89 satisfaction relation, 64 state space, 10 complete lattice, 7 composition of models, 223 compositional theory, 35, 185 inference rules TL, 92-104 TL„, 104-106 compositionality, 6, 35 completeness, 187 property, 91-113 structural, 222-230 conjunction rule, 94 consequence rules, 96 consequent, 70 CSA, 134, 143 defined,112 verification, 143 verification code, 240 A, A*, A f , see defining sequence set data representation, 120 De Morgan's Law Q.49 TL, 60, 207 defining pair, 53 defining sequence, 35 defining sequence set, 77-78 defining set, 53 defining trajectory, 35 defining trajectory set, 81 depth, 59 direct method, 131 disjunction rule, 95 domain information, 188 downward closed, 8 S, 85 FL, 116 future research, 186 generalised transitivity rule, 111 guard rule, 102 hand proof, 24 heuristic, 126 252 Index hidden weighted bit, 141, 239 identity rule, 93 IFIP WG10.5 benchmark matrix multiplier, 162 multiplier, 149 implication <2,50 TL, 57 inconsistency, 42, 70 inference rules implementation, 123-128 theory, 92-104 information ordering state space, 42, 183 truth domain, 48 integer, 120 interpretation, 61 interpretations representation, 117 join, 7 lattice, 7, 183 linear time, 45 logic quaternary, 184 definition, 47-51 motivation, 11 temporal, 21 mapping method, 134 matrix multiplier circuit, 162 meet, 7 min g, 72 modal logic, 21 model checking combined with theorem proving, 31 review, 27-31 model comparison, 23 model structure, 41 monotonic, 8 multipliers example verifications, 147 floating point, 148 IFIP WG10.5 benchmark, 149 other work, 160 verification code, 240 next state function, 43, 45-47 next state relation, 45^17, 187 non-determinism, 43, 187 notation sequences, 56 OBDDs, see BDD V, see power set parametric representation, 118 partial order, 7, 48, 183 partial order state space, 8 power set, 73 preorder, 7, 74 prototype, 15, 31, 123 Q, see logic,quaternary Q -predicate, 52 quaternary logic, see logic, quaternary TZ, see realisable states IZr, see trajectory, realisable range, 54 realisable fragment of TL„, 65, 89 states, 42-43 trajectory, 69 Sf, see trajectory satisfaction scalar, 56 symbolic, 62 simple, 54 specialisation definition, 102 discussion, 99-103 Index 254 heuristic, 127 rule, 103 state explosion problem, 5 state representation, 118, 183 strict dependence, 111 substitution, 100 substitution rule, 100 summary of results, 183-186 symbolic model checking, 30 symbolic simulation, 63 symbolic trajectory evaluation, 83-88 algorithm, 131, 132, 134 introduced, 11 original version, 33 T, T \ T f , see defining trajectory set temporal logic, 21 testing machines, 132 theorem prover, 25 combined with model checking, 
31-33 combined with STE, 123 thesis goals and objectives, 12-15 outline, 15 time-shift heuristic, 127 time-shift rule, 93 TL algebraic laws, 60, 207 boolean subset, 85, 100 equivalence of formulas, 59 scalar, 52-59 semantics, 56, 62 symbolic, 60-63 syntax, 55, 61 T L n , 63—65 trajectory, 69 assertions, 35 _ defining trajectory set, 81 formulas, 34 minimal, 72 realisable, 69 trajectory evaluation algorithms, 128 transitivity rule, 98 truth ordering, 48 X,42 unrealisable behaviour, 42 until rule, 103 upward closed, 8 variable ordering, see BDD, variable order-ing variables, 60 verification condition, see assertions Voss, 31, 116 U,42 Y , 43 
