Compositional model checking of partially ordered state spaces Hazelhurst, Scott 1996

Full Text

COMPOSITIONAL MODEL CHECKING OF PARTIALLY ORDERED STATE SPACES

by Scott Hazelhurst
B.Sc.Hons, University of the Witwatersrand, Johannesburg
M.Sc., University of the Witwatersrand, Johannesburg

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
January 1996
© Scott Hazelhurst, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Computer Science
The University of British Columbia
2366 Main Mall
Vancouver, Canada V6T 1Z4

Abstract

Symbolic trajectory evaluation (STE) — a model checking technique based on partial order representations of state spaces — has been shown to be effective for large circuit models. However, the temporal logic that it supports is restricted, and, as with all verification techniques, it has significant performance limitations. The demand for verifying larger circuits and the need for greater expressiveness require that both these problems be examined.

The thesis develops a suitable logical framework for model checking partially ordered state spaces: the temporal logic TL and its associated satisfaction relations, based on the quaternary logic Q. TL is appropriate for expressing the truth of propositions about partially ordered state spaces, and has suitable technical properties that allow STE to support a richer temporal logic. Using this framework, verification conditions called assertions are defined, a generalised version of STE is developed, and three STE-based algorithms are proposed for investigation. Advantages of this style of proof include: models of time are incorporated; circuits can be described at a low level; and correctness properties are expressed at a relatively high level.

A primary contribution of the thesis is the development of a compositional theory for TL assertions. This compositional theory is supported by the partial order representation of state space. To show the practical use of the compositional theory, two prototype verification systems were constructed, integrating theorem proving and STE. Data is manipulated efficiently by using binary decision diagrams as well as symbolic data representation methods. Simple heuristics and a flexible interface reduce the human cost of verification. Experiments were undertaken using these prototypes, including verifying two circuits from the IFIP WG 10.5 Benchmark suite. These experiments showed that the generalised STE algorithms were effective, and that through the use of the compositional theory it is possible to verify very large circuits completely, including detailed timing properties.
Table of Contents

Abstract
List of Figures
List of Tables
List of Definitions, Theorems and Lemmas
Acknowledgement
Dedication

1 Introduction
  1.1 Motivation
  1.2 Verification and the Use of Formal Methods
  1.3 Partially-ordered State Spaces
    1.3.1 Mathematical Definitions
    1.3.2 Using Partial Orders
    1.3.3 Symbolic Trajectory Evaluation
  1.4 Research Contributions
  1.5 Outline of Thesis

2 Issues in Verification
  2.1 Binary Decision Diagrams
  2.2 Styles of Verification
    2.2.1 Property Checking
    2.2.2 Modal and Temporal Logics
    2.2.3 Model Comparison
  2.3 Proof Techniques
    2.3.1 Theorem Proving
    2.3.2 Automatic Equivalence and Other Testing
    2.3.3 Model Checking
  2.4 Symbolic Trajectory Evaluation
    2.4.1 Trajectory formulas
  2.5 Compositional Reasoning
  2.6 Abstraction
  2.7 Discussion

3 The Temporal Logic TL
  3.1 The Model Structure
  3.2 The Quaternary Logic Q
  3.3 An Extended Temporal Logic
    3.3.1 Scalar Version of TL
    3.3.2 Some Laws of TL
    3.3.3 Symbolic Version
  3.4 Circuit Models as State Spaces
  3.5 Alternative Definition of Semantics

4 Symbolic Trajectory Evaluation
  4.1 Verification with Symbolic Trajectory Evaluation
  4.2 Minimal Sequences and Verification
  4.3 Scalar Trajectory Evaluation
    4.3.1 Examples
    4.3.2 Defining Trajectory Sets
  4.4 Symbolic Trajectory Evaluation
    4.4.1 Preliminaries
    4.4.2 Symbolic Defining Sequence Sets
    4.4.3 Circuit Models

5 A Compositional Theory for TL
  5.1 Motivation
  5.2 Compositional Rules for the Logic
    5.2.1 Identity Rule
    5.2.2 Time-shift Rule
    5.2.3 Conjunction Rule
    5.2.4 Disjunction Rule
    5.2.5 Rules of Consequence
    5.2.6 Transitivity
    5.2.7 Specialisation
    5.2.8 Until Rule
  5.3 Compositional Rules for TLn
  5.4 Practical Considerations
    5.4.1 Determining the Ordering Relation
    5.4.2 Restriction to TLn
  5.5 Summary

6 Developing a Practical Tool
  6.1 The Voss System
  6.2 Data Representation
  6.3 Combining STE and Theorem Proving
  6.4 Extending Trajectory Evaluation Algorithms
    6.4.1 Restrictions
    6.4.2 Direct Method
    6.4.3 Using Testing Machines
    6.4.4 Using Mapping Information

7 Examples
  7.1 Simple Example
    7.1.1 Simple Example 1
    7.1.2 Hidden Weighted Bit
    7.1.3 Carry-save Adder
  7.2 B8ZS Encoder/Decoder
    7.2.1 Description of Circuit
    7.2.2 Verification
  7.3 Multipliers
    7.3.1 Preliminary Work
    7.3.2 IEEE Floating Point Multiplier
    7.3.3 IFIP WG 10.5 Benchmark Example
    7.3.4 Other Multiplier Verification
  7.4 Matrix Multiplier
    7.4.1 Specification
    7.4.2 Implementation
    7.4.3 Verification
    7.4.4 Analysis and Comments
  7.5 Single Pulser
    7.5.1 The Problem
    7.5.2 An Example Composite Compositional Rule
    7.5.3 Application to Single Pulser
  7.6 Evaluation

8 Conclusion
  8.1 Summary of Research Findings
    8.1.1 Lattice-based Models and the Quaternary logic Q
    8.1.2 The Temporal Logic TL
    8.1.3 Symbolic Trajectory Evaluation
    8.1.4 Compositional Theory
  8.2 Future Research
    8.2.1 Non-determinism
    8.2.2 Completeness and Model Synthesis
    8.2.3 Improving STE Algorithms
    8.2.4 Other Model Checking Algorithms
    8.2.5 Tool Development

Bibliography

A Proofs
  A.1 Proof of Properties of TL
    A.1.1 Proof of Lemma 3.3
    A.1.2 Proof of Theorem 3.5
    A.1.3 Proof of Lemma 3.6
    A.1.4 Proof of Lemma 3.7
  A.2 Proofs of Properties of STE
    A.2.1 Proof of Lemma 4.4
    A.2.2 Proof of Theorem 4.5
  A.3 Proofs of Compositional Rules for TLn

B Detail of testing machines
  B.1 Structural Composition
    B.1.1 Composition of Models
    B.1.2 Composition of Circuit Models
  B.2 Mathematical Preliminaries for Testing Machines
  B.3 Building Blocks
  B.4 Model Checking

C Program listing
  C.1 FL Code for Simple Example 1
  C.2 FL Code for Hidden Weighted Bit
  C.3 FL Code for Carry-Save Adder
  C.4 FL Code for Multiplier
  C.5 FL Code for Matrix Multiplier Proof

Index

List of Figures

1.1 Example Lattice State Space
1.2 The Partial Order for C
3.1 Inverter Circuit
3.2 Inverter Model Structure — Flat State Space
3.3 Lattice-based Model Structure
3.4 Non-deterministic state relation
3.5 Lattice State Space and Transition Function
3.6 The Bilattice Q
3.7 Definition of g and h
4.1 The Preorder ⊑
5.1 Example
5.2 Two Cascaded Carry-Save Adders
6.1 An FL Data Type Representing Integers
6.2 Data Representation
6.3 A CSA Adder
7.1 Simple Example 1
7.2 Circuit for the 8-bit Hidden Weighted Bit Problem
7.3 B8ZS Encoder
7.4 Base Module for Multiplier
7.5 Schematic of Multiplier
7.6 Black Box View of 2Syst
7.7 Cell Representation
7.8 Implementation of Cell
7.9 Systolic Array
7.10 Single Pulser
B.1 BB(g, 3): a Three Delay-Slot Combiner
B.2 BB(g, 4)

List of Tables

3.1 Conjunction, Disjunction and Negation Operators for Q
5.2 Summary of TLn Inference Rules
7.3 CSA Verification: Experimental Results
7.4 Benchmark 17: Correspondence Between Integer and Bit Nodes
7.5 Verification Times for Benchmark 17 Multiplier
7.6 Inputs for the 2Syst Circuit
7.7 Outputs of the 2Syst Circuit
7.8 Benchmark 22: Actual Output Times

List of Important Definitions, Theorems and Lemmas

Chapter 3: Definitions 3.1, 3.2, 3.3, 3.6, 3.7, 3.8, 3.10, 3.11, 3.12, 3.13; Lemmas 3.1, 3.2, 3.3, 3.4; Theorem 3.5; Lemma 3.7
Chapter 4: Definitions 4.1, 4.2, 4.4, 4.7, 4.9, 4.12, 4.14, 4.15, 4.18, 4.19; Theorem 4.2; Lemmas 4.3, 4.4; Theorem 4.5; Lemma 4.8
Chapter 5: Lemma 5.2; Theorems 5.3, 5.4, 5.5; Lemma 5.6; Theorems 5.7, 5.8; Lemmas 5.9, 5.10; Theorems 5.11, 5.12; Corollary 5.13; Theorems 5.14, 5.15, 5.16, 5.17, 5.18, 5.19, 5.20, 5.21; Lemma 5.22; Corollary 5.23; Lemmas 5.24, 5.27, 5.28; Theorem 5.31
Appendix A: Lemmas A.6, A.11; Theorems A.12, A.13, A.14; Lemma A.15; Theorems A.16, A.17; Lemmas A.18, A.19; Theorems A.20, A.21
Appendix B: Definition B.1; Lemmas B.1, B.2, B.3

Acknowledgement

'A person is a person through other people.'

The person I asked to proofread this said it was too sentimental.
Perhaps it is, but the experience and time spent on this thesis have been a very important part of my life, and it is important for me to thank some of the many people who have contributed to this experience.

The Computer Science Department and the Integrated Systems Laboratory at UBC have been a wonderful place to work. First, I must thank my supervisor, Carl Seger. Carl has supported and mentored me since I arrived at UBC, and I am very grateful for all that he has done, well beyond what I could have expected. Carl, I have learned a tremendous amount from you. I also thank the other members of my supervisory committee: Paul Gilmore, Nick Pippenger, Son Vuong, and particularly Mark Greenstreet, who has provided enthusiasm, ideas, and support and given freely of his time. Thanks to all members of the ISD group, especially Mark Aagaard, Nancy Day, Mike Donat, Catherine Leung, Helene Wong and most especially Andy Martin.

There are many others in the department who have been important too. My office-mates, Carol Saunders, Catherine Leung, Nana Kender, and Rita Dilek, have been supportive. Helene Wong, Marcelo Walter, Peter Smith, Jim Boritz and Alistair Veitch have all made being here a richer experience. The administrative and technical staff and management of the department have provided an environment that was a pleasure to work in. Financial support from UBC, NSERC and ASI made my research possible.

For the last two years, Green College has not only been a stimulating academic environment but also a great community to be part of, and I am very fortunate to have had this experience. There are many College members who have helped me over the years.

There are two people whose friendship I shall always count as major accomplishments at UBC. I am very grateful to Patricia Yuen-Wan Lin for sharing the journey and giving me constant encouragement. Peter Urmetzer has been a very good friend throughout. Thank you for all you have done.

Many others helped get me here and have supported me since, and I would like to thank all my friends. I wish to particularly mention Saul Johnson and Annette Wozniak (and Kiah); Conrad Mueller, Ian Sanders, Philip Machanick and all the other Turing Tipplers; Sheila Rock; Stan Szpakowicz; Anthony Low; Georgia and Hugh Humphries.

Finally, my greatest thanks go to my family: my parents, David and Ethel, and my sister, Jo Ann. The encouragement and support you have always given me, in many different ways over a long period of time, have sustained me. This experience and what I have learned here is due to your efforts.

To my parents and the memory of Maurice Yatt and Edith Hazelhurst

Woza Moya omuhle
Sibambane ngezandla sibemunye
— Work For All, Juluka, 1983

Chapter 1
Introduction

1.1 Motivation

As computers become ubiquitous in our society, and as more parts of our global society are affected directly and indirectly by computers, the need to ensure their safe and correct behaviour increases. The hyperbole encountered in the media tends to make people blasé about the importance of computers and to undervalue the revolutionary effect that computers have had. But, as our dependency on computers increases, so does the complexity of computer systems, making it more difficult to design and build correct systems at the same time as it becomes more important to do so. What we can do 'sort of' right far exceeds what we can do properly.
As a scientific and engineering discipline, computer science is intimately concerned with predicting and knowing the properties of computer systems, and it is here that mathematics and the application of formal mathematical methods are critical. Traditional methods of ensuring correct operation of software and hardware are often not able to provide a sufficiently high degree of confidence of correctness. Methods such as testing and simulation of systems cannot hope to provide anywhere near exhaustive coverage of system behaviour, and while sophisticated test generation techniques exist, the sheer size of systems makes testing more and more difficult and expensive.

Verification — a mathematical proof of the correctness of a design or implementation — uses formal methods to obviate these problems. Questions of verification have been at the heart of computer science since the work of Turing and others [124], and the fundamental limits of computation (questions such as computability, tractability and completeness) are of immense consequence when discussing the theoretical and practical limitations of verification. The theoretical importance of verification is reflected in the practical consequences of verification, or the lack thereof, illustrated recently by the extremely well-publicised error in a commercial microprocessor [64, 104, 110].

This is not to suggest that formal methods are a panacea, or that other approaches are unimportant. Indeed, in many safety critical or other important applications, there may be social and ethical constraints on what should be built. There are many technical and non-technical factors that will affect the quality of systems that are built. Testing at different levels will continue to be important. Moreover, there are limitations on what verification can offer. With respect to hardware verification, Cohn points out that neither the actual hardware implementation nor the intentions motivating the device can be subject to formal methods [42]. Verification is inherently limited by the models used. And verification is computationally expensive and requires a high level of expertise. Although there has been some success in the use of formal methods, there are a number of practical and organisational problems that must be dealt with, especially when formal methods are first used by an organisation [114, 119].

Over a quarter of a century ago, C.A.R. Hoare summed up his view of the use of formal methods [82]:

The practice of supplying proofs for nontrivial programs will not become widespread until considerably more powerful proof techniques become available, and even then will not be easy. But the practical advantages of program proving will eventually outweigh the difficulties, in view of the increasing costs of programming error. At present, the method which a programmer uses to convince himself of the correctness of his program is to try it out in particular cases and to modify it if the results produced do not correspond to his intentions. After he has found a reasonably wide variety of example cases on which the program seems to work, he believes that it will always work. The time spent in this program testing is often more than half the time spent on the entire programming project; and with a realistic costing of machine time, two thirds (or more) of the cost of the project is involved in removing errors during this phase.
The cost of removing errors discovered after a program has gone into use is often greater, particularly in the case of items of computer manufacturer's software for which a large part of the expense is borne by the user. And finally the cost of error in certain types of program may be almost incalculable — a lost spacecraft, a collapsed building, a crashed aeroplane, or a world war. Thus, the practice of program proving is not only a theoretical pursuit, followed in the interests of academic respectability, but a serious recommendation for the reduction of costs associated with programming error.

As a manifesto for verification, with minor changes it might well have been written today. On the surface, re-reading this may seem to be cause for pessimism — what has changed in 25 years? However, this is misleading. Verification is very difficult and can be extremely expensive (Owre et al. estimate the cost of a partial formal specification and verification of a commercial, 500 000 transistor microprocessor at 'three man-years' of work [105]); this complexity, lack of expertise, and conservatism are problems in the greater adoption of formal methods. But the cost of not performing verification can be much higher (Intel estimated the cost of the flaw in the Pentium microprocessor at US$475 million [65]), and as will be seen in Chapter 2, there have been significant theoretical and practical advances showing that the promise of advantages from formal methods has been realised. The progress that has been made, the increased need for verification, and the challenges which these needs create, make the comments expressed in this extract more relevant today than they were in 1969: we need more powerful proof techniques, and techniques that are easier to use.

The rest of this chapter is structured as follows. Section 1.2 introduces the use of verification and formal methods. Section 1.3 motivates and describes the underlying approach to verification adopted in this thesis. Section 1.4 describes the research contribution of the thesis, and Section 1.5 outlines the rest of this thesis.

1.2 Verification and the Use of Formal Methods

Consider an example of a chip which divides two 64-bit numbers. There are 2^128 possible combinations of input. Exhaustive testing of all these combinations is an impossible feat — even if we were to test 10^9 combinations a nanosecond for a million millennia, we would be able to test fewer than one per cent of the cases. Moreover, this testing would ignore the possible effects of internal state of the chip (it could be that the chip works correctly when initialised, but that the effect of computing some answers updates internal registers so that subsequent computations are incorrect).

This example illustrates the underlying problem in checking for correctness. The number of behaviours of a system, particularly if it is reactive or concurrent, is very large. Not only does this make exhaustive testing impractical, it makes reasoning about computer systems, whether software or hardware, difficult. Since testing often cannot be comprehensive, verification is appealing in giving a higher confidence in the correctness of systems. The use of formal methods allows a mathematical proof to be given of correctness.

Of course, we can only verify what can be modelled mathematically. The verification of the correctness of a chip is the verification of its logical design.
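The arithmetic behind the exhaustive-testing claim above is easy to check. The following throwaway Python calculation is not part of the thesis; it simply plugs in the figures quoted in the text (10^9 tests per nanosecond, a million millennia of testing).

```python
# Back-of-the-envelope check of the exhaustive-testing claim
# (illustrative only; the figures are those quoted in the text).

input_combinations = 2 ** 128                    # all pairs of 64-bit operands
tests_per_second = 10 ** 9 * 10 ** 9             # 10^9 tests per nanosecond
seconds = 10 ** 6 * 1000 * 365.25 * 24 * 3600    # a million millennia, in seconds

tests_performed = tests_per_second * seconds
fraction_covered = tests_performed / input_combinations
print(f"fraction of input space covered: {fraction_covered:.2e}")
# ~9.3e-05, i.e. well under the 'one per cent' bound (in fact under 0.01%).
```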
We have some mathematical model of the behaviour of the components (gates or transistors) and use this to infer properties of the system. Such a verification is only as good as the model of the components. Models like this must make simplifications about the physical world. While the simplifications made often do not affect our ability to make predictions about the behaviour of the world, it is important to realise the potential problem. The question of how good the model of the world is, and the problem of realising a logical design as a physical artifact, are critical problems. However, they are beyond the scope of this thesis. Focussing on the problem of verifying a logical design is difficult enough, and this will be the focus of this research: this section introduces verification and some of the research problems associated with verification, and Chapter 2 will give a fuller survey of verification.

Verification requires that both the specification and the implementation be described using some mathematical notation with a well-defined formal semantics. There are many choices open to the verifier. Common choices for describing an implementation are finite state machines or labelled transition systems — often these are extracted directly from higher level descriptions such as programs. A common choice for the specification is a temporal logic, which allows the description of the intended behaviour of a system over time. If the implementation is described as a finite state machine and the specification as a set of temporal formulas, verification consists of showing that the finite state machine satisfies these formulas.

The fundamental problem with verification is that the number of states in a model of a system is exponentially related to the number of system components; this is known as the state explosion problem. Finding automatic verification techniques is difficult; general versions of the problem are undecidable [124], and some restricted versions remain undecidable, while others are NP-hard [55].

Many verification approaches have been suggested — these will be surveyed in the next chapter. The problems caused by large state spaces manifest themselves in different ways, as can be seen with two of the most popular methods, theorem proving and automatic model checking. A large state space imposes significant computational costs on the verification task. This is a particular problem for automatic model checking techniques, which are based on state exploration methods. Although theorem provers may be less sensitive to the size of the state space in terms of their computational cost, the cost of human intervention is high, often requiring a high degree of expertise and making the verification more difficult and much more lengthy.

Dealing with the state explosion problem motivates much research in verification, and a number of methods to limit the problem have been suggested. Some of the methods examined in this research are:

• The use of good data structures to represent model behaviour is critical. The development and use of Ordered Binary Decision Diagrams in the 1980s was very important in extending the power of automatic verification methods.

• Abstraction. By constructing an abstraction of the model, and proving properties of the abstraction rather than the model, significant performance benefits may be gained.
Of course the problem of finding the abstraction, and of showing that the properties proved of the abstraction are meaningful for the model, are non-trivial.

• Compositionality. Divide and conquer is one of the most common strategies in computer science, and one which can be very helpful with verification. Property decomposition is useful when the cost of verification is highly sensitive to the complexity of the properties to be proved; it provides a way of combining 'smaller' results into 'larger' ones. Structural decomposition allows different parts of the system to be reasoned about separately; these separate results are then used to deduce properties of the entire system.

• Hybrid approaches. Different verification techniques have different advantages and disadvantages, so by combining different approaches it might be possible to overcome the individual disadvantages.

The choice of model of the system is critical. This choice affects the way in which properties are proved, what satisfaction means, and how abstraction and compositionality can be used. The next section motivates and describes the method of representing state space and model behaviour adopted by this thesis.

1.3 Partially-ordered State Spaces

One of the starting points of this thesis is that partially-ordered sets are effective representations of the state spaces of systems. This section introduces the necessary mathematical definitions, motivates why partial orders are useful representations, describes how they are used, and then introduces an appropriate verification method.

1.3.1 Mathematical Definitions

A partial order, R, on a set S is a reflexive, anti-symmetric and transitive relation on S, i.e. R ⊆ S × S and ∀s ∈ S, (s, s) ∈ R. Typically, an infix notation is used for partial orders. Thus, if ⊑ is a partial order, then x ⊑ y is used for (x, y) ∈ ⊑. A preorder on S is a reflexive and transitive relation.

If ⊑ is a partial order on S, then it can be extended to cross-products of S and to sequences of S. If (s1, ..., sn), (t1, ..., tn) ∈ S^n, then (s1, ..., sn) ⊑ (t1, ..., tn) if si ⊑ ti for i = 1, ..., n. Similarly for sequences (elements of S^ω): s1 s2 s3 ... ⊑ t1 t2 t3 ... if si ⊑ ti for i = 1, 2, ...

If S is a set with partial order ⊑, and s, t ∈ S, then u is the least upper bound, or join, of s and t if s, t ⊑ u (i.e. it is an upper bound) and, whenever s, t ⊑ v, then u ⊑ v (i.e. it is no larger than any other upper bound). In this thesis, the join of s and t will be denoted s ⊔ t. In general, it is not the case that every pair of elements in a partially ordered set has a join — a pair of elements could have many minimal upper bounds, each of which is incommensurable with the others, or no upper bound at all. Similarly, the greatest lower bound of s and t, or the meet of s and t, is denoted s ⊓ t, and in general not all pairs of elements will have a meet.

A partially ordered set S is a lattice if every pair of elements has a meet and a join. By induction, in any lattice any finite subset has a least upper bound and a greatest lower bound. S is a complete lattice if every set of elements — finite or infinite — has a least upper bound and a greatest lower bound. In particular, complete lattices have unique universal upper and lower bounds. Note that all finite lattices are complete — this is a result that is used extensively in this thesis.
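These definitions are easy to experiment with computationally. The sketch below is illustrative only and is not taken from the thesis or its FL code: it represents a small complete lattice (the subsets of {a, b} ordered by inclusion) as an explicit order relation, computes joins and meets directly from the definitions, and checks a function for monotonicity. All names in it are made up for the example.

```python
from itertools import product

# The four subsets of {'a', 'b'}, ordered by inclusion, form a small complete lattice.
elements = [frozenset(), frozenset('a'), frozenset('b'), frozenset('ab')]

def leq(x, y):
    """The partial order: here, set inclusion."""
    return x <= y

def join(x, y):
    """Least upper bound: the upper bound that lies below every other upper bound."""
    ubs = [z for z in elements if leq(x, z) and leq(y, z)]
    return next(u for u in ubs if all(leq(u, z) for z in ubs))

def meet(x, y):
    """Greatest lower bound: the lower bound that lies above every other lower bound."""
    lbs = [z for z in elements if leq(z, x) and leq(z, y)]
    return next(v for v in lbs if all(leq(z, v) for z in lbs))

def is_monotonic(f):
    """f is monotonic iff x below y implies f(x) below f(y)."""
    return all(leq(f(x), f(y)) for x, y in product(elements, repeat=2) if leq(x, y))

# In this particular lattice, join is union and meet is intersection.
assert all(join(x, y) == x | y and meet(x, y) == x & y
           for x, y in product(elements, repeat=2))
assert is_monotonic(lambda x: x | frozenset('a'))       # adding information is monotonic
assert not is_monotonic(lambda x: frozenset('ab') - x)  # complementation is not
```

The same pattern (an explicit order relation plus brute-force joins and meets) is handy for sanity-checking small information-ordering lattices like the ones used later in the chapter.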
If S is a complete lattice over the partial order ⊑ then, under the natural extensions of ⊑:

• a finite cross-product of S, S^n, is a complete lattice; and
• the set of all sequences of S is a complete lattice.

If S1 and S2 are two lattices with partial orders ≤1 and ≤2, and g: S1 → S2 is a function, then g is monotonic with respect to ≤1 and ≤2 if s ≤1 t implies that g(s) ≤2 g(t).

If S is a lattice and A ⊆ S, then A is upward closed if a ∈ A, x ∈ S and a ⊑ x implies that x ∈ A. Similarly, A is downward closed if a ∈ A, x ∈ S and x ⊑ a implies that x ∈ A.

Partial orders are used in two important ways in this thesis. First, given a state space, partial orders are used to compare the information content of states. s ⊑ t implies that s has less information than t; if s ⊑ t and s ⊑ u, then informally we can think of s as representing both t and u; it is an abstraction of these two states. It is fairly easy to generate partial order models of systems like circuits from gate-level descriptions, and in many cases good partial-order models can be extracted automatically from switch-level descriptions. The second way partial orders are used is to differentiate between levels of truth, a central theme in this thesis.

1.3.2 Using Partial Orders

Formally, a model can be described by ((S, ⊑), Y), where S is a complete lattice under the partial order ⊑ and the behaviour of the model is represented by the next-state function Y : S → S, which is monotonic with respect to the partial order. The partial order can be extended to sequences of S.

To see why partial orders might be useful, consider as an example a system which can be in one of five states. A next state function Y describes the behaviour of the system. The state space could be represented by a set containing five elements. However, there is an advantage in representing the state space with a more sophisticated mathematical structure. In this example, we represent the state space with the lattice shown in Figure 1.1 (note that this is just one possible lattice). States s4–s8 are the 'real' states of the system, and the other states are mathematical abstractions (Y can be extended to operate on all states of the lattice). The partial ordering of the lattice is an information ordering: the higher up in the ordering we are, the more we know about which state the system is in. For example, the model being in state s1 corresponds to the system being in state s4 or s5. State s9 represents a state that has contradictory information.

Figure 1.1: Example Lattice State Space

States like s1 are useful because if one can prove that a property holds of state s1, then (given the right logical framework) that property also holds of s4 and s5. There can be a great performance advantage in proving properties of states low in the lattice.

Furthermore, state s9 plays an important role, since it represents states about which inconsistent information is known. Although such states do not occur in 'reality', they are sometimes artifacts of a verification process. A human verifier may introduce conditions which are inconsistent with each other or with the operation of the real system. These conditions could lead to worthless verification results — ones that while mathematically valid tell us nothing about the behaviour of the system and may give
verifiers a false sense of security. Since it may not be possible to detect these inconsistencies directly, it is useful to have states in which inconsistent properties can hold at the same time. In such states, a property and its negation may both hold, and we should have a way of expressing this. ('A truth that's told with bad intent / Beats all the lies you can invent.' — William Blake.)

In this example, the potential savings are not large, but for circuit models extremely significant savings can be made. The state space for a circuit model represents the values that the nodes in the circuit take on, and the next state function can be represented implicitly by symbolic simulation of the circuit. The nodes in a circuit take on high (H) and low (L) voltage values; there is a natural lattice in which these voltage values can be embedded. It is useful, both computationally and mathematically, to allow nodes to take on unknown (U) and inconsistent or over-defined (Z) values. The set C = {U, L, H, Z} forms a lattice, with the partial order given in Figure 1.2.

Figure 1.2: The Partial Order for C (U ⊑ L ⊑ Z and U ⊑ H ⊑ Z)

The state space for a circuit is then naturally represented by C^n, which is a complete lattice. Consider a circuit with n components and a state, s, of the circuit: s = (v1, ..., vm, U, ..., U), with n − m trailing U components, where the vi are boolean values. With the right logical framework, if we can prove that a property g holds of the state s, then we can infer directly that the property holds for all states above it in the information ordering.

If we only consider the subset of states {L, H}^n (those states with known, consistent voltages on each component), there are 2^(n−m) states above s of the form (v1, ..., vn), where the vi are boolean variables. So, in one step, 2^(n−m) 'interesting' proofs are done (this step would also prove properties about states with partial or inconsistent information). Through the judicious use of U values, the number of boolean variables needed to describe the behaviour of the circuit can be minimised, increasing the size of the circuits that can be dealt with directly.

The purpose of model checking is to determine whether a model has a certain property — ideally, a verification method should answer this 'yes' or 'no'. Unfortunately, the performance benefit gained by using only partial information compromises this goal. In the example above, while every property of the circuit will be true or false of states s4–s8, there will be some properties which are neither true nor false of states s0–s3, since there is insufficient information about those states.

The converse problem exists with a state like s9. Assigning the same level of credibility and meaningfulness to the truth of property g in state s9 as to the truth of g in s5 violates a common-sense understanding of truth.

Both these factors indicate that a two-valued logic has insufficient expressiveness when dealing with a partially-ordered state space. To say that something is true or false in states like s1 and s9 may be very misleading. And, we shall see later that a two-valued logic also has a serious technical defect in this situation.

1.3.3 Symbolic Trajectory Evaluation

Symbolic trajectory evaluation (STE) is a model checking approach based on partially ordered state spaces. STE computes the next state relation using symbolic simulation. Not only does this allow the partially-ordered state space structure to be exploited effectively, it supports accurate, low-level models of circuit structures. ('Approach' is emphasised above because a number
Not only does this allow the partially-ordered state space structure to be exploited effectively, it supports accurate, low-level models of circuit structures. ('Approach' is emphasised above because a number  Chapter  1.  Introduction  12  of different possible STE-based algorithms and implementations exist. Moreover, STE-based algorithms can be used in different logical frameworks.) Previous work with STE has shown that it is an effective method for many circuits (e.g., see [8, 47]) and it is recognised as one of the few methods with good asymptotic performance on a large class of non-trivial circuits [26]. STE is particularly useful in dealing with large circuits, where the circuit is modelled at a low level (gate or switch level), and where timing is important. Higher-level verifications are important too, but, as Cohn points out, realistic and detailed models of circuits are important to ensure that the mathematical results proved are meaningful [42]. Although successfully applied, these STE-based approaches are not without their problems. First, the underlying problem of the state explosion problem still exists, and as with all verification methods, better and more powerful techniques must be developed as the computation bottle-necks are still there. Second, in existing STE-based approaches, the logic used to express properties is limited; for example, disjunction and negation is not fully supported. While the logic is expressive enough for many problems and the restricted form of the logic leads to very efficient model checking algorithms, there are problems which need a richer logic. Third, previous approaches have used a two valued logic, which, in the context of partially ordered state spaces, is confusing. For a restricted logic, the complication caused by insufficient and contradictory information can be dealt with adequately in an extra-logical way; this is inadequate for a richer logic.  1.4  Research Contributions  No one verification method is suitable for all verification problems. The choice of model (how complete and what level), how correctness is expressed, and the choice of underlying theoretical framework and practical tool depends on many factors: the problem domain; what properties  Chapter  1.  13  Introduction  the verifier wishes to prove; the expertise of those involved; what level of confidence in the verification has to be obtained; and very importantly the computational and human resources available. The research work presented in this thesis is motivated by the promise that trajectory evaluation offers in dealing with large circuits, especially where a detailed model of timing is required. The strength of this is that not only can the high-level algorithmic descriptions of functionality be verified, but the low-level implementation details can be checked too; verification can be done on switch-level or detailed gate level circuit descriptions. This is particularly important when the transformation from high-level description to low-level implementation is errorprone. Timing properties can be verified at the micro-level (e.g. checking that circuit values stabilise by the end of a clock cycle) or at the macro-level (e.g. checking which clock cycle something happens). Since many other verification methodologies have difficulty with detailed verification of large circuits, this is an important line of research to develop. 
This thesis starts from the premiss that extending the power of STE-based methods, by increasing the range and size of systems that can be verified and the types of properties that can be expressed in a verification, is a significant contribution. The goal of this research is to show that the applicability of symbolic trajectory evaluation can be significantly extended through the development of an appropriate temporal logic for model checking partially-ordered state spaces, and the use of a compositional theory for trajectory evaluation. The specific contributions of this thesis are listed below.

• Proposing a temporal logic suitable for partially ordered state spaces.
Traditional two-valued logics are unsuitable for expressing properties of partially-ordered state spaces; my first thesis is that a four-valued logic is suitable. This logic distinguishes the following four cases: true, false, under-determined, and over-determined. Not only does the four-valued logic provide a framework for representing our knowledge of the degree of truth of a proposition, it is a suitable technical framework. This framework is useful not only for STE-based model checking algorithms, but also for other verification methods based on partially-ordered state spaces developed in the future.
A qualification is in order — the use of uncertainty to model both state information and degrees of truth is epistemological rather than ontological in nature. The question of uncertainty in the 'real world' is well outside the scope of this thesis. Uncertainty is used in system models because this offers significant computational advantages. This uncertainty in the model induces an uncertainty in our knowledge of the model. Thus the four-valued logic is useful for reasoning about our knowledge of the model, and the use of the four-valued logic is proposed for its utilitarian value, not as an excursion into general philosophy.

• Generalisation of symbolic trajectory evaluation based algorithms.
My second thesis is that, using the four-valued framework, STE-based algorithms can be generalised to support a richer logic. Providing a richer logic is important because it supports the verification of a greater range of applications. Moreover, it often makes the specification of properties clearer, which makes the verification more meaningful for the user; and more elegant specifications can also lead to more efficient model checking.

• A Compositional Theory.
My third thesis is that a compositional theory for model checking partially-ordered state spaces can be developed, and that it provides a foundation for overcoming the performance limitations of model checking. The compositional theory allows verification results to be combined in different ways into larger results. The structure of the state space lends itself to the compositional theory, and together with the compositional theory allows very large state spaces to be model checked. The key part of the development of the compositional theory is to show that it is sound; that all results inferred could, at least in principle, be directly obtained through trajectory evaluation or some other model checking algorithm.
• Development of a practical tool.
While the proposed four-valued logic and compositional theory have theoretical interest, a major part of the significance of the contribution of the work comes from my fourth thesis: that generalised STE and the compositional theory for model checking partially ordered state spaces using a four-valued logic can be used effectively, making a significant contribution to the size and complexity of circuits that can be verified. This is demonstrated by the development of prototype verification tools. These prototypes show that it is effective to combine theorem proving and STE-based model-checking, as these approaches complement each other. While the prototypes are not of interest in themselves, they demonstrate that very large circuits can be formally verified using the approaches advocated here. The prototypes are also of interest because of the lessons they provide about tool-making.

1.5 Outline of Thesis

The rest of the thesis is structured as follows. Chapter 2 gives a brief overview of verification, and then reviews related literature. This raises the important issues and problems of verification, motivates choices made in this research, and places the research into context.

Chapter 3 presents the four-valued logic Q, and the temporal logic, TL, based on Q. After defining the logics, the issue of satisfaction — what it means to say that a certain property holds of a state or sequence of states — is explored and different alternatives are given.

The theory of generalised symbolic trajectory evaluation is given in Chapter 4, using the theory presented in Chapter 3. Although the theory of trajectory evaluation is general, at this stage its major application area is circuit verification.

Chapter 5 develops the theory of composition for the verification of partially-ordered state spaces. The compositional inference rules are explained and shown to be sound, and simple examples are given. The compositional theory is very important in increasing the range of systems that can be verified using trajectory evaluation.

Chapter 6 ties the preceding chapters together and shows how the theory can be practically implemented. Issues of data and state representation are discussed, and practical model checking algorithms based on STE are outlined, as well as how the verification styles of model checkers and theorem provers can be combined.

Chapter 7 is devoted to example verifications. A few simple verification examples are given to show the style of verification, and then some large verification examples are given. This chapter shows that the methodology proposed here can be effectively implemented.

Chapter 8 is a conclusion, and the appendix contains some of the more technical proofs and example programs.

A Guide to the Reader

This thesis contains many definitions, theorems and a significant amount of mathematical notation. A reader may find the index at the end of the thesis and the List of Important Definitions, Theorems and Lemmas useful in finding cross-references. The nature of the research requires that the thesis contain many proofs. Many of these proofs are highly technical and uninteresting in themselves; this does not make for the easiest or most captivating reading, for which I apologise. I have tried to make the exposition of proofs as clear as possible, and have adopted the following convention for proofs. Each step in the proof contains three parts, laid out in three columns: a label, a claim, and a justification.
The justification may refer to previous steps in the proof using the labels given.

Lemma 1.1 (Example). If S is a complete lattice under the partial order ⊑, and g : S → S is monotonic with respect to ⊑, then for all s, t ∈ S, g(s) ⊔ g(t) ⊑ g(s ⊔ t).

Proof.
(1) s ⊑ s ⊔ t [definition of join]
(2) t ⊑ s ⊔ t [definition of join]
(3) g(s) ⊑ g(s ⊔ t) [from (1) by monotonicity of g]
(4) g(t) ⊑ g(s ⊔ t) [from (2) by monotonicity of g]
(5) g(s) ⊔ g(t) ⊑ g(s ⊔ t) [from (3), (4) by property of join]

Chapter 2
Issues in Verification

This chapter is intended to place the thesis work in perspective and relate the research to other work. It is not intended as a comprehensive survey of verification, and therefore some simplifications are made and important verification methods skimmed over. For fuller surveys on the topic see [73, 97, 119].

Overview of Chapter

Section 2.1 briefly introduces a method of representing boolean functions. Since boolean expressions are used extensively in verification for a variety of purposes, efficient methods for representing and manipulating them are essential.

The review of verification starts with Section 2.2, which introduces two of the main styles of verification. In one style, verifying a model means checking whether the model has certain properties. In the other, two models are compared to see whether a certain relationship holds between them (for example, whether they have equivalent observable behaviour).

For each of the styles of verification, there are a number of possible verification techniques. Section 2.3 gives a brief overview of some of the large number of verification techniques, differing in approach and detail, that have been proposed. Section 2.4 examines one of these proof techniques in more detail; the method of symbolic trajectory evaluation proposed by Bryant and Seger forms the basis of this thesis.

Due to the computational complexity of verification, all these methods have limitations, and research continues in trying to improve upon them. Much of this research in improving verification techniques deals with the search for better algorithms and data structures. Although this is very important, the underlying complexity limitations indicate that something more is needed. Two of the most promising lines of research in this regard have been the work in compositionality and abstraction. They are discussed in Sections 2.5 and 2.6. Section 2.7 concludes the review with a brief discussion of the issues raised in this chapter.

2.1 Binary Decision Diagrams

Many verification techniques — including STE — represent boolean expressions with a data structure called (ordered) Binary Decision Diagrams (BDDs). BDDs are a compact, canonical method for the manipulation of boolean expressions [22]. A BDD is a directed, acyclic graph, with internal vertices representing the variables appearing in the expression. BDDs are ordered in the sense that on all paths in the graph, variables appear in the same order. Using this representation, operations such as conjunction, negation, quantification and equivalence testing can be efficiently implemented. Boolean expressions are used to represent state information and the truth of propositions, so it is critical that they can be manipulated efficiently. BDD-based approaches have been extremely successful.

Unfortunately, although BDDs are a very compact representation, there are things that cannot be represented efficiently.
This is not surprising; the satisfiability problem [63] can be represented and solved using BDDs, so if a BDD representation polynomial in size could always be constructed in polynomial time, this would imply that P=NP. Some arithmetic problems cannot be represented efficiently. For example, multiplication of two integers (represented as bit-vectors) requires BDDs exponential in size. For a discussion of the limitations of BDDs, see [21].

One of the critical issues when BDDs are used is the ordering of the variables used in the construction of the graph. The size of the resulting BDD may be highly dependent on the variable ordering, so it is vital that a good variable ordering is used. In general human intervention is needed to determine a good ordering, but fortunately for many real problems a good variable ordering can be found, and heuristics for dynamic variable ordering can be applied successfully. So, while the need to find good variable orderings is an issue, it is not a fundamental problem with BDD-based methods.

Although BDD-based approaches are very successful, there are a number of other successful approaches that do not use BDDs. Some of these are described below. The success of BDDs has also motivated research on other data structures for representing data that BDDs cannot represent efficiently (these are mentioned below too).

2.2 Styles of Verification

To say that a program or circuit is verified is to say that there is a proof that certain mathematical statements are true of a model of that system. This section looks at the different ways of expressing these statements, while Section 2.3 looks at how the statements are proved to hold.

Section 2.2.1 introduces the property checking approach. The idea here is that there is a formal language for expressing properties of the system, and verification consists of proving that these properties hold. Section 2.2.2 leads on from this by introducing modal and temporal logics; these are logics that are commonly used to express properties of interest. This is the style of verification adopted in this thesis. Section 2.2.3 introduces the other style of verification, model comparison. Here, two models of the program or circuit are expressed formally (typically, a specification and an implementation), and verification consists of proving that the two models are equivalent.

2.2.1 Property Checking

One major approach to verification is to determine whether a description of a program has (or
This style of verification can be used for small programs, and can be appropriate for small, complicated algorithms. However, on a larger scale it is not useful as it is just too tedious to use, especially for hardware systems. There are many ways of expressing properties. For example, one approach has been to perform reachability analysis on the program (e.g. discovering whether there are any deadlock states). Another approach — the one adopted in this research — is to use some form of logic to express properties. Often modal or temporal logics are used for reactive systems. 2.2.2  Modal and Temporal Logics  Modal logics are systems of logic for describing and reasoning about contingent truths. The type of modal and temporal logics of interest here are used to describe the behaviour of systems As the term 'system' can be ambiguous since it can refer to the system being verified, or the tool performing verification, the term 'program' is used in a generic sense to describe the system being verified, whether or not the system is represented as a program, a finite state machine, a netlist etc. 1  Chapter 2. Issues in Verification  22  that have dynamic or evolving structure. Many of these logics have been proposed; see the works of Galton [62], Emerson [52] and Stirling [120, 121] for overviews. Typically, a set of formulas of the logic form the specification of the program being verified. The verification task is to test whether the mathematical structure representing the program satisfies this set of formulas. The wide variety of modal logics reflects both the wide variety of application and complexity of the topic. Modal logics and the mathematical structures over which they are interpreted differ in expressiveness. Issues such as non-determinism and the ability to express recurring properties greatly affect issues such as usefulness, decidability and computational complexity. Temporal logics are particularly useful in verification. They can be used to specify the behaviour of a system over time. Time can be a 'real' time, or some abstraction thereof; and can also be modelled as continuous or discrete. The method proposed in this thesis has the advantage of being able to model time fairly accurately. The most powerful logic of interest here is the family of modal /i-calculi, variously attributed to Park and Kozen. The expressive power of the //-calculi, determined by the modal operators available, have a marked effect on the decidability of logic: for example, the linear time modal //-calculus is decidable, while the branching time modal //-calculus is not [55]. Other modal and temporal logics are restricted versions of the //-calculus. CTL*, CTL, LTL, and the Hennessy-Milner logic are good examples of logics which can be encoded within a version of the //-calculus. There are a number of ways in which temporal logics can be classified (see [52] for details). The most important question is whether the logic is branching time or linear time (see [53] for some discussion of this).  Chapter 2. Issues in Verification  2.2.3  23  Model Comparison  In this approach, two models or descriptions are compared to see whether a given relationship holds between them. The most important relationship is equivalence, but there are other useful relationships. One way of using this form of checking is for one description to be a specification of a system and the other description to be an implementation. 
Showing that a formal relationship holds between the two descriptions shows that the implementation is correct.

General versions of this problem are undecidable. Turing machine equivalence is the best example, and these decidability results apply to popular methods such as process algebras (as CCS can encode Turing machines, this has direct relevance to much work in this area). However, there are restricted, useful versions of the problem which are decidable (see, for example, [32]).

There are a number of different ways in which models can be represented. What equivalence and more general types of relationships mean, and how they are checked, depend very heavily on this. Three of the main approaches are:

1. Process algebras such as CCS [102] and CSP [20]. There are many different types of equivalence, which depend on how fine-grained an equivalence is desired (see [102] for a discussion of this). There are a number of other relationships which are defined as preorders on processes. These can be used to define correct implementations of specifications. See [80] for examples. A good example of this approach is the LOTOS specification language, which is based on CCS and CSP [15]. Equivalence and implementation relationships can be used to show that one LOTOS program is a correct implementation of another.

2. Language containment. If the descriptions are finite state machines, then equivalence may be language equivalence. Some verification problems can be posed as language containment problems. See [73] for an overview.

3. Logic. Equivalence is logical equivalence. Other logical relationships such as implication may be suitable for showing that a model is a correct implementation of a specification. See [94] for an example.

Other approaches exist (e.g. [74]). There is a close relation between equivalence checking and property checking. In CCS, two processes are bisimilar exactly when they satisfy the same set of formulas of the Hennessy-Milner logic [81]. Grumberg and Kurshan have shown a relationship between classes of CTL* formulas and language equivalence or containment problems [71].

2.3 Proof Techniques

Now that we have defined what we mean by program correctness, we can examine proof techniques. We first look at why formal proof techniques are important, and then examine some of these techniques: Section 2.3.1 discusses theorem proving; Section 2.3.2 discusses automatic techniques suitable for proving equivalences; and Section 2.3.3 discusses model checking, an approach that can be used to prove that models of systems satisfy temporal logic formulas. Section 2.4 presents the model checking approach that forms the basis of this thesis in more detail.

Hand proof techniques are the most ubiquitous, for a variety of reasons. They are powerful methods which allow a variety of proof techniques, appropriate informal arguments, and abstractions to be made. However, there are two important reasons why hand proofs are avoided in the context of verification, particularly hardware verification. First, proofs are extremely tedious to perform. Often they are not complex but have large amounts of intricate detail which is difficult and unpleasant for humans to keep track of. Second, errors are extremely likely. These two factors are related.

Some examples illustrate this. In [102], Milner presents the 'jobshop' example — a specification, an implementation and a proof of weak bisimulation between the two. The proof has many errors.
Most of these errors are trivial; however, there is a serious theoretical error on which the proof relies which was only detected much later. It must be emphasised that this is a best case scenario for hand proofs: the models and notation are fairly abstract, the proofs fairly short and interesting in themselves, and the person making the proof of undoubted mathematical ability.  2  There are many other examples like this; they show how fallible and time-consuming hand proofs are (see [ 112]). The alternative to hand proofs are machine checked and automated proofs. A machine checked proof is a proof that has each step validated by a program that implements some logical inference system. An automated proof is one which is generated without human intervention according to some set of sound rules. Often the performance of these systems may depend on extra information given by the human verifier. These approaches have been applied to both the equivalence and property-checking types of problems. 2.3.1 A  Theorem Proving  theorem prover  is a program that implements a formal logic. Using this program, statements  in the logical system can be proved. Typically, the logical system consists of a set of axioms and inference rules, and the program ensures that all theorems are sound in that they are derived from the axioms by application of the inference rules. Although much work has gone into automatic theorem proving the key aspect is mechanically checking each step rather than the automatic derivation of the proof. Theorem provers can be used to prove theorems about any mathematical system. Within the For hardware verification the converse is true: the level of abstraction is low with intricate detail, the proofs are long and tedious and of no intrinsic interest, and few people doing verification are Turing Award winners. This criticism extends to other domains too. In a recent paper, Bezem and Groote present the verification of a network protocol in a recent paper [13]. The proof is very lengthy and detailed. The claim that this is not such a problem because the proofs are 'trivial' is unconvincing. 2  Chapter 2. Issues in  Verification  26  verification area, there is a strong theorem proving community and a range of different theorem provers have been used in verification tasks. Some examples of theorem provers and work done with theorem provers are: • HOL [68]. HOL is one of the first and best known theorem provers. It was built on work done on the development of LCF [69] in the 1970s (see [60] for a brief history). HOL implements a strongly typed higher-order predicate logic. The user's interface to HOL is through M L [107], a polymorphic, typed functional language. This interface promotes both security (by ensuring through the type system that only theorems proven in HOL can be proved) and flexibility by allowing the programmer access to a fully programmable script language. HOL has been used on the verification of a number of systems. • Boyer-Moore [17]. This theorem prover is based on a quantifier-free first order logic. It is heavily automated, although a user can (must?) 'train' the prover to deal with particular proofs. An example of a substantial verification effort using this system can be found in [86]. • PVS [106,105] is also theorem prover based on a typed, higher-order logic. It has a number of decision procedures built in which allow a number of the proof obligations to be discharged automatically. See [112] for an example use of PVS. 
Theorem provers can be used for either equivalence or property checking. For example, if both the specification, S, and implementation, /, are logical formulas then asking whether S and / are equivalent means asking whether S = I is a theorem in the logic. In the property checking approach, Gordon shows how a simple theorem prover can be used to prove program correctness using the Floyd-Hoare logic [67]. Theorem provers have also been used in model checking; some work is directly relevant to this thesis. For example, Bradfield describes a 'proof assistant' for model checking //-calculus formulas over Petri nets. The  Chapter 2. Issues in Verification  27  proof system is a tableau-based one (see below). At each step in the proof either the prover itself applies a proof rule, or the user does [18]. Sometimes, this type of approach leads to a hybrid system which uses both the automatic model checking algorithms described below and theorem proving approaches - this is discussed in more detail later. 2.3.2 Automatic Equivalence and Other Testing For certain systems which can be represented as labelled transition systems (such as certain classes of CCS agents), the Concurrency Workbench has algorithms for computing different kinds of equivalences and preorders [41]. The two advantages of using equivalences such as bisimulation over language equivalence are: • Bisimulation can distinguish behaviour which language equivalence can not. • There are significant computational advantages. For example, deciding regular language equivalence is PSPACE-complete, while the best known algorithm for deciding bisimulation between two regular processes is 0(m log n) where ra is the number of transitions in the process and n is the number of states [103]. Forfinitestate systems, CCS agents can be represented using BDDs [54], from which equivalence relations can be computed [27]. Other work in this line includes a tool which can compute equivalence of LOTOS programs [56]. Other approaches can also be applied to transition systems, see ([1, 14]). 2.3.3  Model Checking  Given a model of a system behaviour, M , and a temporal logic formula g interpreted over M, the model checking problem is to find out whether g holds of M, or whether M is a model of  Chapter 2. Issues in Verification  28  g. Typically this is written as M \= g. Variations such as finding whether a set of states or a set of sequences of states satisfies the formula are used too. Model checking is a difficult problem: some useful versions are undecidable [55], and the satisfiability and model checking problems for even simple modal logics are NP-hard [63, 52]. Forfinitestate systems, whether a structure is a model of a system can be determined directly from the satisfaction relation by explicit state enumeration: however, except for small systems this is rarely feasible. Tableau-based Methods The tableau-based method is one of the best-known methods and a number of variations have been implemented (the best known implementation is the Concurrency Workbench [41]). Although the underlying proof method is very different to the method of symbolic trajectory evaluation, this method is of some relevance because tableau systems use rules of inference, and because there has been much work in compositional reasoning. Good introductions to the tableau method are [19, 121]. Note that tableau methods do not always require the construction of the global state space. 
A tableau is a proof tree built from a root sequent of the form S h $ (this is the goal sequent). 1  The tree is built using one of the tableau rules until all the leaves of the tree are terminals. If all the terminals are 'successful' then S \= $. The most important and difficult part of the tableau construction is dealing with thefixed-pointoperators, particularly the least fixed point operator. Stirling and Walker proposed a sound and complete tableau system forfinite-stateprocesses [ 122]. They showed that the tableau-construction always terminates, making this method an effective model checking scheme. They also show how the model checking algorithm of Winskel [127] can be incorporated in a tableau scheme. Bradfield extended the tableau approach to infinite state systems [ 19]. Dealing with the fixed  29  Chapter 2. Issues in Verification  point operators is more complicated as the definition of a successful terminal takes some care. His approach is sound and complete. If 5" j=  then using the rules automatically will derive  S h <&, and S h $ will only be derived when £ |=  Note, however, that if Sty=$, his  algorithm may not terminate. Automatic Model Checking through State Exploration For finite state systems, it is feasible to model check some logics through state exploration methods. Although model checking expressive temporal logics such as CTL* is very expensive (the problem is PSPACE-complete), for less expressive logics there are better results (note that model checking LTL is also PSPACE-complete). The best known result is one for model checking the subset of CTL* known as CTL [36]. This algorithm works by building the state transition graph, and then using graph algorithms to label the states in the graph. The algorithm is 0(15110|) where l^l is the number of states in the system, and |</>| is the size of the formula 4>. Recently this work has been extended to show how a richer logic CTL can be model checked 2  with the same complexity result [12]. Although the algorithm is linear in the size of the state space, this is a significant limitation since the size of the state space in many realistic systems is extremely large (a very small circuit with only 100 state holding components can have a reachable state space of size 2 ). 100  State exploration methods can be extended to some types of infinite systems. Burkart and Steffen have developed a state exploration method for effective model checking of the alternationfree /^-calculus for context-free processes [29]. (A local model checking version based on tableaux has also been developed [85]).  Chapter  2. Issues in  Verification  30  Symbolic Model Checking For finite systems symbolic model checking methods are very popular and have had success in a number of applications. The use of BDDs has revolutionised model checking by providing a compact method for implicit state representation, thereby increasing by orders of magnitude the size of the state space that can be dealt with. (Other approaches exist too [14,46]: however BDDs seem to be most effective for a large class of problems.) The most well-known work based on symbolic model checking and BDDs has emerged from Carnegie Mellon University. A number of model checking algorithms for the modal pcalculus and other logics have been developed [26, 27]. The SMV verification system based on these ideas has successfully verified a range of systems [26, 98]. The basic idea of these approaches is to represent the transition relation of the system under consideration with a BDD. 
A set of states is also represented with a BDD. Given a formula of the temporal logic, the model checking task is to compute the set of states that satisfy the formula. The operations defined on BDDs allow the computation of operations such as existential quantification, conjunction etc. Using these BDD operations, it is possible to compute the set of reachable states and the set of states satisfying a given formula. Although these methods have had some success, the computational complexity and cost of model checking remains a significant stumbling block. Symbolic CTL model checking is PSPACE-complete in the number of variables needed to encode the state space [98]. A number of approaches have been suggested to improve the performance of the algorithm: compositional approaches; abstraction; and improving representational methods (for example, partitioning the next state relation [26]). That BDDs revolutionised automatic model checking indicates the importance of good and appropriate data structures, and motivates the search for new ones, and considerable work is  31  Chapter 2. Issues in Verification  being done on extending BDD-style structures and developing new ones [24, 34, 99]).  3  All these approaches to improve symbolic model checking need to be pursued. Circuits with wide data paths are not suitable for verification with SMV, which itself is unable to verify circuits with arithmetic data. However, by extending the method through the use of abstraction [39] or more sophisticated data structures [35] such circuits can be verified. There are other symbolic model checking approaches. Symbolic trajectory evaluation — a central part of this thesis — is one. It differs from other approaches in the novel way in which the state space is represented. Although the logic which it supports is limited it has been successfully used in hardware verification [8,47]. Full details are given later. Other symbolic methods have been proposed in [16, 43, 87]. Combining Theorem Proving and Model Checking Since combining model checking and theorem proving has considerable promise, research has been done in combining the two approaches in different technical frameworks. Seger and Joyce linked the HOL and Voss systems. This allows the HOL theorem proving system to reason about properties of a circuit by using the model checking facilities of Voss [117]. Although there are some similarities between the prototypes presented in this thesis, and the HOL-Voss system, there are two important distinctions: • One of the important uses of a theorem prover with the Voss system is to reason about objects that do not have concise BDD representations in all cases — for example, integer expressions. Rather than providing a general and powerful theorem prover such as HOL, simple semi-automated methods are used to provide the prototypes the ability to do this (see Section 6.2). Although not as powerful as HOL, it is much simpler. • The prototypes provide specialised theorem provers that implement a compositional theory for STE. The use of this compositional theory increases the power of the verification 3  Some of these approaches are applicable to other model checking approaches too.  Chapter  2. Issues in  Verification  32  approach significantly. Kurshan and Lamport have combined the COSPAN model checker with the TLP theorem prover [93]. The model checker proves properties of components of the system, which are then translated into a form suitable for the theorem prover. 
In order to prove the overall result, a number of sub-results need to be proved. Not only is the way in which composition is handled different to the way it is in this thesis, there are also two very important practical distinctions: first, their approach is not entirely mechanised; second, their approach relies on linking two quite distinct tools and using two distinct formalisms, rather than one integrated tool and verification style.

The style of the method of Hungar [84], who also links model checking and theorem proving, is closest to the method of combining model checking and theorem proving proposed in this thesis. The model is given by a Kripke structure representing the semantics of an Occam program, and the properties are expressed in a variant of CTL. The results generated by model checking are combined using the LAMBDA theorem prover. The proof system consists of rules for inferring results using an assume-guarantee style of reasoning. The inference rules used are: embedding, modus ponens, conjunction and weakening. Given an Occam program consisting of a number of processes, properties can be proven of each process using the model checker, and the properties combined. An important distinction between the model used in [84] and the model used in this thesis is that in Hungar's framework, each process has its own model — the model for the entire program is the composition of these models. In the compositional theory proposed in this thesis, a model is given for the entire system, and it is not necessary to give a model for the components of the system.

Rajan et al. have combined a μ-calculus model checker with PVS by using the model checker as a decision procedure for PVS [111]. They demonstrate how such an integrated system can be used. Using the ideas of Clarke et al. discussed below, they create an abstraction of a circuit to be verified. Using theorem proving they show that the abstraction has the required properties. Using model checking they show that the abstraction satisfies the specification. In an alternative approach, Dingel and Filkorn verify abstractions of a system using model checking, using certain assumptions about the system environment [49]. Theorem proving is used to prove the correctness of the abstraction and to ensure that the system environment assumptions are met.

2.4 Symbolic Trajectory Evaluation

This section briefly outlines the existing STE-based approach. This is useful in the later discussion and will help illustrate some of the novel aspects of the thesis. Symbolic trajectory evaluation was first proposed in [23] and the full theory can be found in [116]. Good examples of verification using STE can be found in [8, 47]. This section is heavily based on the presentation of STE found in [77].

The model of a system is simple and general: a tuple M = ((S, ⊑), Y), where (S, ⊑) is a complete lattice (S being the state space and ⊑ a partial order on S) and Y is a monotone successor function Y: S → S. A sequence σ = σ₀σ₁σ₂… is a trajectory if and only if Y(σᵢ) ⊑ σᵢ₊₁ for i ≥ 0.

2.4.1 Trajectory formulas

The key to the efficiency of trajectory evaluation is the restricted language that can be used to phrase questions about the model structure. The basic specification language used is very simple, but expressive enough to capture many of the properties we need to check.

A predicate over S is a mapping from S to the lattice {F, T} (where F ⊑ T).
Informally, a predicate describes a potential state of the system: e.g., a predicate might be (A is x), which says that node A has the value x. A predicate p is simple if it is monotone and there is a unique weakest s ∈ S for which p(s) = T.

TF, the set of trajectory formulas, is defined recursively as:

1. Simple predicates: Every simple predicate over S is a trajectory formula. Simple predicates are used to describe simple, instantaneous properties of the model.

2. Conjunction: (F₁ ∧ F₂) is a trajectory formula if F₁ and F₂ are trajectory formulas. Conjunction allows the combination of formulas expressing simpler properties into a formula expressing a more complex property.

3. Domain restriction: (e → F) is a trajectory formula if F is a trajectory formula and e is a boolean expression over a set of boolean variables, V. Through the use of boolean variables, a large number of scalar formulas (formulas not containing variables) can be concisely encoded into one symbolic formula.

4. Next time: (N F) is a trajectory formula if F is a trajectory formula. Using the next time operator allows the expression of properties that evolve over time.

An interpretation of variables is a function, φ: V → {F, T}. An interpretation of variables can be extended inductively to be an interpretation of expressions. The truth semantics of a trajectory formula is defined relative to a model structure, a trajectory, and an interpretation, φ. Whether a sequence σ satisfies a formula F (written as σ ⊨ F) is given by the following rules.

1. σ ⊨ p iff p(σ₀) = T.
2. σ ⊨ (F₁ ∧ F₂) iff σ ⊨ F₁ and σ ⊨ F₂.
3. σ ⊨ (e → F) iff φ(e) implies (σ ⊨ F), for all interpretations φ.
4. σ₀σ ⊨ N F iff σ ⊨ F.

Given a formula F there is a unique defining sequence, δ_F, which is the weakest sequence that satisfies the formula ('weakest' is defined in terms of the partial order). The defining sequence can usually be computed very efficiently. From δ_F a unique defining trajectory, τ_F, can be computed (often efficiently). This is the weakest trajectory which satisfies the formula — all trajectories which satisfy the formula must be greater than it in terms of the partial order.

If the main verification task can be phrased as 'for every trajectory σ that satisfies the trajectory formula A, verify that the trajectory also satisfies the formula C', verification can be carried out by computing the defining trajectory for the formula A and checking that the formula C holds for this trajectory. Such results are called trajectory assertions and we write them as ⊨ ⟨A ⟹ C⟩. The fundamental result of STE is given below.

Theorem 2.1. Assume A and C are two trajectory formulas. Let τ_A be the defining trajectory for formula A and let δ_C be the defining sequence for formula C. Then ⊨ ⟨A ⟹ C⟩ iff δ_C ⊑ τ_A. □

A key reason why STE is an efficient verification method is that the cost of performing STE is more dependent on the size of the formula being checked than the size of the system model. STE uses BDDs for manipulation of boolean expressions.

2.5 Compositional Reasoning

The main problem with model checking is the state explosion problem — the state space grows exponentially with system size. Two methods have some popularity in attacking this problem: compositional methods and abstraction. While they cannot solve the problem in general, they do offer significant improvements in performance.
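Before turning to these two methods, the following sketch may help make the check of Theorem 2.1 concrete. It is a deliberately naive rendering for a single AND gate with unit delay; the node names, the representation of formulas as lists of (time, node, value) constraints, and the explicit state dictionaries are inventions for this illustration only — an implementation manipulates BDDs and symbolic values rather than scalar constants.

    # Sketch of the STE check of Theorem 2.1 on a toy circuit: an AND gate with
    # inputs 'a', 'b' and output 'o', unit delay.  Node values form a small
    # lattice: X (unknown) below 0 and 1, with T (overconstrained) on top.

    X, ZERO, ONE, TOP = 'X', '0', '1', 'T'

    def lub(u, v):                       # least upper bound of two node values
        if u == v or v == X: return u
        if u == X: return v
        return TOP                       # 0 joined with 1 is the inconsistent value

    def leq(u, v):                       # the information ordering on node values
        return u == X or u == v or v == TOP

    def Y(state):                        # monotone successor function of the circuit
        a, b = state['a'], state['b']
        if a == ZERO or b == ZERO: out = ZERO
        elif a == ONE and b == ONE: out = ONE
        elif TOP in (a, b): out = TOP
        else: out = X
        return {'a': X, 'b': X, 'o': out}    # inputs are unconstrained next step

    BOTTOM = {'a': X, 'b': X, 'o': X}

    def defining_sequence(formula, length):
        # formula: list of (time, node, value) constraints, read as a conjunction
        # of simple predicates under Next^time; the result is the weakest sequence
        # satisfying it.
        seq = [dict(BOTTOM) for _ in range(length)]
        for (t, node, value) in formula:
            seq[t][node] = lub(seq[t][node], value)
        return seq

    def defining_trajectory(delta):
        tau = [delta[0]]
        for d in delta[1:]:              # tau_{i+1} = delta_{i+1} joined with Y(tau_i)
            nxt = Y(tau[-1])
            tau.append({n: lub(d[n], nxt[n]) for n in d})
        return tau

    def ste_check(antecedent, consequent, length):
        tau_A = defining_trajectory(defining_sequence(antecedent, length))
        delta_C = defining_sequence(consequent, length)
        return all(leq(delta_C[i][n], tau_A[i][n])
                   for i in range(length) for n in BOTTOM)

    # |= < (a is 1) and (b is 1)  ==>  Next (o is 1) >
    A = [(0, 'a', ONE), (0, 'b', ONE)]
    C = [(1, 'o', ONE)]
    print(ste_check(A, C, 2))            # True

The point of the exercise is only to show the shape of the computation: the antecedent fixes a weakest starting point, the successor function is iterated once per time step, and the consequent is compared pointwise against the result using the information ordering.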
Compositional reasoning is a critical aspect of program verification. The following advantages are stated in [51: 4  'Weakest' is defined in terms of the partial order.  Chapter 2. Issues in Verification  36  • Modularity: if a module of a system is replaced, only the module need be verified; • In design or synthesis it is possible to have undefined parts of a system and still be able to reason about it; • By decomposing the verification task, verification can be made simpler; • Re-use of verification results is promoted. The difficulty of compositional reasoning is that often it is the case that a particular component may not have a property that we desire of it when placed in a general environment. However, when placed in the context of the rest of the system, then it does display the property. See [3] for some discussion of the issues involved in this type of reasoning. For tableau-based methods, a number of approaches have been suggested. Andersen et al. have proposed a proof system for determining the whether processes of a general process algebra [5] satisfy a formula. They show that a set of 39 inference rules is sound, and - for a class offinite-statesystems - is complete. Although this is an important contribution, it is difficult to assess the impact of this work without substantive examples. Furthermore, to be practical I believe the proof system needs some form of mechanical assistance. In related work, Berezine has proposed two model checking algorithms for fragments of the yu-calculus [11] (here model checking asks whether p f= $ — does the process p satisfy $). Both methods can be used to verify problems of the form pxq \= $, where pxq represents the composition of processes p and q. Thefirsttakes the problem and constructs a formula $ such p  that q \= $ iff p x q \= $. The second constructs two formulas $ and <& such thatp x q (= $ p  p  9  iff q |= $ and p \= <!>„. As the work is preliminary, it is difficult to assess the applicability and p  effectiveness of this approach. Compositional techniques have been proposed for symbolic model checking. Clarke et al. have proposed a method for systems of concurrent processes [40]. To model check P\\E (= 4> may not be computationally feasible (where P represents a process of interest, and E represents  Chapter 2. Issues in  37  Verification  the environment). They show how an interface process, certain conditions P\\E  \= 4>iiP\\A  \= <j>.  can be constructed such that under  A,  The point of this is that the state graph of  P \\A  can be  considerably smaller that of P\\E. Although this method has theoretical interest and there are examples of systems for which it works, it has not been established how applicable this method is, and how easy (in terms of human and computation cost) it is to establish the conditions for correct application. Another approach to compositional reasoning — modular verification — is based on defining a preorder relation,^, between models [72,95]. This preorder is based on a simulation relationship between the models and has the property that if M •< M and M (= 0 then M (= 0 . 1  2  2  x  Suppose we wish to show that a process M when placed in its environment satisfies a property 0 . While M may not in general satisfy 0 , it may satisfy it whenever its environment satisfies another property 0 . Given the formula 0 , there exists a 'tableau' M$ which is the strongest element in the preorder which satisfies 0 . If E •<  and M\\M^ \= 0 , then by the property  of the preorder, M\\E \= 0 . 
The verification therefore includes proving the simulation relation and performing model checking. Both of these steps are automatic, using symbolic algorithms. This method is only applicable to finite state systems. Aziz et al. propose a compositional method dependent on the formula being checked [6]. The model is represented as a composition of state machines. Given a formula to be checked, an equivalence relation is computed for each machine which preserves the truth of the formula. Using these equivalence relations, quotient machines are constructed and the composition of these machines computed. This composition will have a smaller state space than the original composition and can be used to determined the correctness of the formula. Other compositional approaches exist too. Some of these focus on the question of the refinement of a specification into an implementation. They tend to use hand proofs. Examples of other approaches include [3, 89, 93].  Chapter 2. Issues in  38  Verification  2.6 Abstraction The idea behind abstraction is that instead of verifying property / of model M, we verify property $A of model MA and the answer we get helps us answer the original problem. The system M  A  is an abstraction of the system M. One possibility is for the abstraction MA to be equivalent (e.g. bisimilar) to M. This some-  times leads to performance advantages if the state space of MA is smaller than M, but usually this type of abstraction is used in model comparison (e.g. as in [74]). Typically, the behaviour of an abstraction is not equivalent to the underlying model. The abstractions are conservative  in that M  A  satisfies /A implies that M satisfies / (but not necessarily  the converse). Some examples of abstraction methods are [50, 70, 83, 95]. In hardware verification, abstraction is particularly needed in dealing with the data path of circuits. A drawback of abstraction is that it takes effort to both come up with the suitable abstraction (see [37, 123]) and prove that the abstraction is conservative. For an example of this type of proof see [28]. Clarke etal. define abstractions and approximations [39]. They show how an approximation can be abstracted from the program text without having to construct the model of the system. They provide a number of possible abstractions: congruence modulo an integer (the use of the Chinese remainder theorem); representation by logarithm; single-bit and product abstraction; and symbolic abstraction. They show how this is used on a number of examples.  2.7 Discussion Although equivalence checking is also attractive, this thesis explores one model checking because of its success in verifying large state spaces. Moreover, in some situations it is not appropriate or possible to have a formal model to compare an implementation against (although work such as [130] offers some ideas in how such a model could be built from a set of properties).  Chapter 2. Issues in  Verification  39  Theorem provers and model checkers both have strong adherents because both methods have had successes. However, they both have weaknesses. Automatic verification techniques have the advantage of being automated, but have limitations on the size of the systems that they can deal with, and theorem proving methods, while very powerful, are still computationally intensive and require a great deal of skill. Work such as [77, 84, 93, 117] among others shows that there is much to be gained from combining the approaches. 
The vision adopted in this research is that symbolic model checking is used to prove lowlevel properties of the system which would be very tedious for the theorem prover, while the theorem prover — partly automated — is used to prove higher-level properties. Efficient model checking is very important. Although tableau-based methods are powerful and attractive in some situations, BDD-based methods are more appropriate forfinitestate systems, especially VLSI circuits. Although progress has been made, much work remains to be done to improve performance by examining issues such as abstraction, composition and methods for state and transition relation representation. There are many different criteria for evaluating verification methods, depending on application and setting (for some discussion of this, see [119]). Three criteria for evaluating the approaches discussed above are: 1. Range of application; 2. Performance; 3. Degree of automation/ease of use. For a fuller discussion of the use of verification methods in industry, see [119]. A problem with this area is that because verification is very difficult, methods tend to be suited for particular applications. Often it is difficult to compare approaches because they solve different problems. The types of properties to be checked for and the way in which the model is represented are critical. For example, verifying multiplier circuitry modelled at the switch  Chapter 2. Issues in  Verification  40  level where timing is critical is a very different proposition to dealing with a very high level description of an algorithm where timing is not an issue. Furthermore, because many of these problems are so difficult (i.e. are NP-hard) analytic categorisations of different algorithms are not always very useful. Empirical results are also difficult to analyse since verifications are run with systems on different hardware architectures and written in different languages. Particularly difficult to measure is how easy the verification method is to use (how automatic is an automatic verification method) for different classes of user. Many examples in the literature give a few examples but fail to give convincing evidence that the method will work on a larger class of problems. All of this is exacerbated by a lack of published empirical results. Work with detailed performancefiguresis available (such as [26]) but important theoretical contributions such as [5, 72] come with no performance results and only small examples to illustrate the applicability of the method. The importance of gaining more experimental results has been recognised ([26] is a good example), and the IFIP working group on hardware verification has recently established a benchmark suite to help facilitate comparative work [91]. Chapter 7 presents some experimental data in order to evaluate the methods proposed in this thesis.  Chapter 3 The Temporal Logic TL  This chapter introduces and defines the quaternary temporal logic at the core of the research. Section 3.1 describes the model over which formulas of the logic are interpreted: a complete lattice is used to represent the set of instantaneous states, and a monotonic next state function is used to represent system behaviour. This gives a way of formally describing an implementation of a system such as a VLSI design. Section 3.2 defines Q, a quaternary logic, and proves elementary properties of this logic. Using Q as a base, the quaternary temporal logic TL is defined in Section 3.3. 
The syntax of TL formulas is given; the truth of these formulas is defined with respect to sequences of states of the model. TL gives a way of describing intended behaviour. The primary application of the theory presented in this thesis is for circuit models. For convenience, and as these models have useful properties, it is appropriate to specialise the temporal logic for circuit models. This is discussed in Section 3.4. A critical question is whether the model satisfies the intended behaviour. Section 3.5 is a precursor for the discussion of this question in Chapter 4 by presenting alternative semantics for TL; these semantics illustrate the idea on which the model checking approach of Chapter 4 is based. 3.1  The Model Structure  The model  structure  ((<S,  C ), TZ,  Y) represents the system under consideration.  41  Chapter 3.  The Temporal  Logic  42  TL  • S, a complete lattice under the information ordering • , represents the state space. Let X be the least element in S.  (When S = C , then X = U .) n  n  • 71 C S, the set of realisable states, represents those states which correspond to states the system could actually attain — S — 71 are the 'inconsistent' states, which arise as artifacts of the verification process. Which states are realisable and which are inconsistent is entirely up to the intuition of the modeller; the entire state space could be realisable, or only part of it. Verification conditions will be of the form: do sequences that satisfy g also satisfy hi Distinguishing unrealisable behaviour from realisable behaviour allows the detection of cases where verification conditions are vacuously satisfied: if it is the case that no sequences with only realisable states satisfies g then the verification condition may indeed by satisfied. However, it is likely that either the specification or implementation are wrong. On the other hand, it may be that for all sequences of realisable states the verification conditions are satisfied, but that some sequences with unrealisable behaviour satisfy g but do not satisfy h. If we consider the set of all sequences, the verification condition will fail; if we consider only the sequences of realisable states the verification conditions succeed. Thus, the concept of realisability allows the modeller to deal with inconsistent information in a sensible way: detecting vacuous results and ignoring degenerate cases. There is a technical requirement: 71 must be downward closed, so that if x 6 11, and yQx  then y <G 1Z. This makes computation much easier and has a sound intuitive basis.  Intuitively, if a state is not realisable, it is because it is 'inconsistent'; any state above it in the information ordering must be even more 'inconsistent' and thus also not realisable. Conversely, if a state is 'consistent', then a state below it in the information ordering will  Chapter 3.  The Temporal  Logic  43  TL  Figure 3.1: Inverter Circuit  also be'consistent'. • Y : S —> S is a monotonic next state function: if s Q t then Y(s) C Y ( i ) . Although the next state function is inherently deterministic, the partial-order structure of the state space can model non-determinism to some extent. A useful analogy here is that a non-deterministicfinitestate machine can be modelled by a deterministic one — in the deterministic machine, a state represents a set of states of the non-deterministic machine. In the same way, in our partial-order setting, a state represents all the states above it in the partial order. 
By embedding a flat, non-deterministic model in a lattice, the model 1  becomes deterministic. The next state function Y can be thought of as a representation of the next state relation {{s,t)<ES xS  :Y{s)Qt}.  Therefore, although technically we deal with a deterministic system, the deterministic system models non-deterministic behaviour.  Example For synchronous circuit models, the most important way in which non-determinism is used is to model input non-determinism, that is the non-deterministic behaviour of inputs of a circuit. One way of modelling the fact that inputs of the circuit are controlled by the environment, not by 1  A set without any structure.  Chapter 3. The Temporal Logic TL  44  the circuit, is to have a non-deterministic next state relation. For example, consider the simple inverter circuit of Figure 3.1. If the model structure uses a flat state space, the state space and next state relation shown in Figure 3.2 are likely candidates for the model structure. For the next state relation, for each row there is a transition from the state in the first column to each state in the second column.  {(L,L),(L,H),(H,L),(H,H)}  (a) State space  From  To  (L,L) (L,H) (H,L) (H,H)  (L,H),(H,H) (L,H),(H,H) (L,L),(H,L) (L,L),(H,L)  (b) Next state relation  Figure 3.2: Inverter Model Structure — Flat State Space If a partial order state space is used, one way of constructing the model structure is shown in Figure 3.3. Figure 3.3(a) shows the state space, and Figure 3.3(b) gives the next state function. A c entry in the table means that this row holds for all c G C (Z,Z) (L,Z) (ZJ0(ZjH)(H,Z)  X  X  X  (L,L)(L,H)(H,L)(H,H)  X  X  X  (L,U)(U,L)(U,H)(H,U) (U,U)  (a) State space  From To (U,Z) (Z,c) (L,c) (H,c) ( M  (U,H) (U,L) (U,U)  (b) Next state function  Figure 3.3: Lattice-based Model Structure  45  Chapter 3. The Temporal Logic TL  Branching time versus linear time One of the key issues of temporal logics is whether the logic is linear time or branching time. Since the next state function of the model is deterministic, and since in practice all temporal formulas used are finite, the question of whether the logic used is linear or branching time is rather a fine point. Nevertheless, as trajectory evaluation has been described as a linear time approach [26, p. 403], and as non-determinism can be represented by the model structure, the topic should be discussed briefly. 'Logically the difference between a linear and a branching time operator resides with the possibility of path switching . . . ' [120]. The model structure proposed here deals with nondeterminism by merging paths where necessary. If in the flat model structure there are nondeterministic transitions from state s to states i i , . . .  , tj,  in the lattice model structure there is  a state t, such that t C t for i = 1,... , j, and a single deterministic transition from s to t. %  Consider the non-deterministic transition diagram shown in Figure 3.4. The difference between linear time and branching time semantics is nicely illustrated here. Suppose an instantaneous property g is true in states si and s and false in all other states. With a linear time semantics, 3  we can express the property that in all runs of the system, there exists a state from which time all states in the run have the property g. 
This cannot be expressed in a branching time semantics: for example, in the run s₀s₃s₃s₃…, a branching time semantics always detects the possibility of path-switching and takes into account the potential of a transition from s₃ to s₂.

Using a lattice structure, instead of using the set S = {s₀, …, s₃} as the state space, we use a subset of the power set of S. The state space shown in Figure 3.4 is embedded in the lattice state space shown in Figure 3.5(a) (here, the partial order is shown by dotted lines). The next state relation of Figure 3.4 is replaced with the next state function shown in Figure 3.5(b) (note, only states reachable from s₀ are shown in this transition diagram). Note how the two non-deterministic transitions from s₀ to s₁ and s₃ in Figure 3.4 are merged into one deterministic transition from s₀ to s₄ shown in Figure 3.5(b).

[Figure 3.5: Lattice State Space and Transition Function — (a) the partial order on the subsets of S, from {} up to {s₀, s₁, s₂, s₃}; (b) the transition function.]

By using the model structure adopted here, non-deterministic paths that exist in a flat model structure are merged, losing information in the process. It is possible to ask the question whether in all runs of the system property g holds; however, the answer returned will be 'unknown.' So, it would not be accurate to characterise the logic proposed here as either linear time or branching time, since the distinction between the two is blurred. As the expressiveness of the logic and the type of non-determinism used in models is limited compared to many other verification approaches, this question of branching versus linear time semantics is not nearly as important as in other contexts.

In the inverter example above, consider the sequence σ = (L, H)(U, H)… in the partial order model. This represents both of the sequences (L, H)(H, H)… and (L, H)(L, H)… in the flat model structure. Proving a property of σ will take into account the branching structure at each state in the sequence: but it does so in a trivial way by considering (at the same time) both possible values of the input node of the inverter.

3.2 The Quaternary Logic Q

The four values of Q, the quaternary propositional logic used as the basis of the temporal logic, represent truth, falsity, undefined (or unknown) and overdefined (or inconsistent). Such a logic was proposed by Belnap [10], and has since been elaborated upon and different application areas discussed in a number of other works [59, 125]. This section first gives some mathematical background, based on [58, 113], and then definitions are given and justified.

A bilattice is a set together with two partial orders, ≼ and ≤, such that the set is a complete lattice with respect to both partial orders. A bilattice is distributive if for both partial orders the meet distributes over the join and vice-versa. A bilattice is interlaced if the meets and joins of both partial orders are monotonic with respect to the other partial order. In our application domain, we are interested in the interlaced bilattice

Q = {⊥, f, t, ⊤}

where the partial orders are shown in Figure 3.6. f and t represent the boolean values false and true, ⊥ represents an unknown value, and ⊤ represents an inconsistent value.
B denotes the set {f, t} (so B ⊆ Q). The partial order ≼ represents an information ordering (on the truth domain), and the partial order ≤ represents a truth ordering. (Note, the ordering ⊑ is used for comparing states and the ordering ≼ is used to compare different truth values.) It is very important to emphasise at this point that lattices are used to represent both truth information and state information.

[Figure 3.6: The Bilattice Q — the information ordering ≼ and the truth ordering ≤.]

Informally, the information ordering indicates how much information the truth value contains: the minimal element ⊥ contains no truth information; the mutually incommensurable elements f and t contain sufficient information to determine truth exactly; and the maximal element ⊤ contains inconsistent truth information. The truth ordering indicates how true a value is. The minimum element in the ordering is f (without question not true); and the maximum element is t (without question true). The two elements ⊥ and ⊤ are intermediate in the ordering — in the first case, the lack of information places it between f and t, and in the second case, inconsistent information does.

Formally, the partial orders ≤ and ≼ are relations on Q (i.e., subsets of Q × Q). It is useful to consider the relations as mappings from pairs of elements to a truth domain (if two elements are ordered by the relation we get a true value, if not a false value). Informally, therefore, we can consider the partial orders as mappings from Q × Q to B.

For representing and operating on Q as a set of truth values, there are natural definitions for negation, conjunction and disjunction, namely the weak negation operation of the bilattice and the meet and join of Q with respect to the truth ordering [58]. These definitions are shown in Table 3.1, and have the following pleasant properties, which make Q suitable for model-checking partially-ordered state spaces.

• The definitions are consistent with the definitions of conjunction, disjunction and negation on boolean values.

• These operations have their natural distributive laws, and also obey De Morgan's laws (so, the definition of disjunction was redundant).

• Efficiency of implementation. The quaternary logic is represented by a dual-rail encoding, i.e. a value in Q is represented by a pair of boolean values, where:
  – ⊥ = (F, F),
  – f = (F, T),
  – t = (T, F),
  – ⊤ = (T, T).
  If a is represented by the pair (a₁, a₂) and b by the pair (b₁, b₂), then a ∧ b is represented by the pair (a₁ ∧ b₁, a₂ ∨ b₂), a ∨ b by the pair (a₁ ∨ b₁, a₂ ∧ b₂), and ¬a = (a₂, a₁). These operations on Q can be implemented as one or two boolean operations.

    ∧ | ⊥  f  t  ⊤          ∨ | ⊥  f  t  ⊤
    ⊥ | ⊥  f  ⊥  f          ⊥ | ⊥  ⊥  t  t
    f | f  f  f  f          f | ⊥  f  t  ⊤
    t | ⊥  f  t  ⊤          t | t  t  t  t
    ⊤ | f  f  ⊤  ⊤          ⊤ | t  ⊤  t  ⊤

    ¬⊥ = ⊥    ¬f = t    ¬t = f    ¬⊤ = ⊤

Table 3.1: Conjunction, Disjunction and Negation Operators for Q
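The dual-rail encoding and the operations of Table 3.1 can be rendered directly. In the sketch below, ordinary Python booleans stand in for the BDDs used in practice, and the final assertions check exhaustively some of the properties claimed above; the names are chosen for the illustration only.

    # Dual-rail encoding of Q: a value is a pair (evidence for, evidence against).
    BOT, F, T, TOP = (False, False), (False, True), (True, False), (True, True)
    Q = [BOT, F, T, TOP]

    def q_and(a, b):
        return (a[0] and b[0], a[1] or b[1])

    def q_or(a, b):
        return (a[0] or b[0], a[1] and b[1])

    def q_not(a):
        return (a[1], a[0])

    # De Morgan's laws hold for all sixteen pairs of operands.
    assert all(q_not(q_and(a, b)) == q_or(q_not(a), q_not(b)) for a in Q for b in Q)
    assert all(q_not(q_or(a, b)) == q_and(q_not(a), q_not(b)) for a in Q for b in Q)
    # Consistency with the boolean operations on {f, t}.
    assert q_and(T, F) == F and q_or(T, F) == T and q_not(T) == F
    # The 'classical' choices discussed below: TOP and BOT combine to f and t.
    assert q_and(BOT, TOP) == F and q_or(BOT, TOP) == T

Because every operation is one or two boolean operations on the rails, the same definitions lift directly to symbolic (BDD-valued) rails.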
Implication, ⇒, is defined as a derived operator: a ⇒ b = ¬a ∨ b.

There is an intuitive explanation of the dual-rail encoding and the implementation of the operators. If q is encoded by the pair (a, b), a is evidence for the truth of q, and b is evidence against q. To compute q₁ ∧ q₂, we conjunct the evidence for q₁ and q₂ and take the disjunction of the evidence against. The computation of q₁ ∨ q₂ is symmetric. And if a is the evidence for q and b the evidence against q, then b is the evidence for ¬q and a is the evidence against ¬q.

However nice this intuition, the definition of Q is not without problem. In the context of a temporal logic, it is hard to justify the definition that ⊤ ∧ ⊥ = f. Similarly, since ⊤ ∨ ⊥ = t, if t = q₁ ∨ q₂ it is not necessarily the case that either q₁ or q₂ is t. Nevertheless it is the 'classical' definition, and is convenient because the dual-rail encoding is efficient. Other definitions are possible too (for example, defining the operations so that if ⊤ is an operand, the result of the operation must be ⊤ too) and might simplify some of the proofs in later sections and chapters; the particular definition adopted in this thesis is not fundamental.

The following properties of Q are used in subsequent proofs. The first lemma is a consequence of the property that negating a value does not increase the information available.

Lemma 3.1.
1. If q ≠ ¬q, then q ⋠ ¬q.
2. If q₁ ≼ q₂, then ¬q₁ ≼ ¬q₂.

Proof.
1. If q ∈ {⊥, ⊤}, then q = ¬q. If q = t, then ¬q = f, and t ⋠ f. Similarly, f ⋠ t.
2. (a) If q₁ = ⊥, then ¬q₁ = ⊥, and ⊥ ≼ q for all q.
   (b) If q₁ = t, then ¬q₁ = f and q₂ ∈ {t, ⊤}. Therefore ¬q₁ ≼ ¬q₂.
   (c) Similarly, if q₁ = f, ¬q₁ ≼ ¬q₂.
   (d) If q₁ = ⊤, then q₁ = ¬q₁ = q₂ = ¬q₂. The result follows by reflexivity of the partial order. □

The second lemma extracts some trivial properties of Q from Table 3.1; these are useful when trying to deduce values of sub-formulas from values of formulas.

Lemma 3.2.
1. If t ≼ q₁ ∨ q₂, then t ≼ qᵢ for at least one of i = 1, 2.
2. If t = q₁ ∧ q₂, then t = qᵢ for both of i = 1, 2. If t ≼ q₁ ∧ q₂, then t ≼ qᵢ for both of i = 1, 2.
3. If f ≼ q₁ ∧ q₂, then f ≼ qᵢ for at least one of i = 1, 2.
4. If f = q₁ ∨ q₂, then f = qᵢ for both of i = 1, 2. If f ≼ q₁ ∨ q₂, then f ≼ qᵢ for both of i = 1, 2.

Proof. Consider Table 3.1.
1. t ≼ q₁ ∨ q₂ only if q₁ ∈ {t, ⊤} or q₂ ∈ {t, ⊤}; in either case t ≼ qᵢ for that i.
2. Only when q₁ = q₂ = t is q₁ ∧ q₂ = t. Only for the four entries of the table with q₁, q₂ ∈ {t, ⊤} is t ≼ q₁ ∧ q₂.
3. f ≼ q₁ ∧ q₂ only if q₁ ∈ {f, ⊤} or q₂ ∈ {f, ⊤}; in either case f ≼ qᵢ for that i.
4. Only when q₁ = q₂ = f is q₁ ∨ q₂ = f. Only for the four entries of the table with q₁, q₂ ∈ {f, ⊤} is f ≼ q₁ ∨ q₂. □

3.3 An Extended Temporal Logic

The propositional logic Q is used as the base for the temporal logic TL. This section first presents the scalar version of TL, the fragment of TL not containing variables, and then presents the symbolic version of TL, which contains variables.

3.3.1 Scalar Version of TL

Given a model structure ((S, ⊑), R, Y), a Q-predicate over S is a function mapping from S to the bilattice Q. A Q-predicate p is monotonic if s ⊑ t implies that p(s) ≼ p(t) (monotonicity is defined with respect to the information ordering of Q). A Q-predicate is a generalised notion of predicate, and to simplify notation, the term 'predicate' is used in the rest of this discussion.

Example 3.1. Take, as an example, the state space S given in Figure 1.1 on page 9. Define g, h: S → Q by:

g(s) = ⊥ when s = s₀;  f when s ∈ {s₁, s₂, s₄, s₅, s₆};  t when s ∈ {s₃, s₇, s₈};  ⊤ when s = s₉

h(s) = ⊥ when s ∈ {s₀, s₂, s₆};  f when s ∈ {s₁, s₄, s₅};  t when s ∈ {s₃, s₇, s₈};  ⊤ when s = s₉

Figure 3.7 depicts these definitions graphically; g and h are Q-predicates. The same state space and functions will be used in subsequent examples. □
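Example 3.1 can also be rendered executably. The ordering used below is an assumed one, chosen only to be consistent with the values just given (the actual lattice is Figure 1.1, which is not reproduced here); the check simply confirms that g and h are monotonic Q-predicates with respect to it.

    # An assumed covering relation for the ten-state lattice of Example 3.1:
    # s0 is the bottom element and s9 the top.
    COVERS = {
        's0': ['s1', 's2', 's3'],
        's1': ['s4', 's5'], 's2': ['s6'], 's3': ['s7', 's8'],
        's4': ['s9'], 's5': ['s9'], 's6': ['s9'],
        's7': ['s9'], 's8': ['s9'], 's9': [],
    }

    def below(s, t):                 # s [= t in the reflexive-transitive closure
        return s == t or any(below(u, t) for u in COVERS[s])

    g = {'s0': 'bot', 's1': 'f', 's2': 'f', 's3': 't', 's4': 'f',
         's5': 'f', 's6': 'f', 's7': 't', 's8': 't', 's9': 'top'}
    h = {'s0': 'bot', 's1': 'f', 's2': 'bot', 's3': 't', 's4': 'f',
         's5': 'f', 's6': 'bot', 's7': 't', 's8': 't', 's9': 'top'}

    # The information ordering on Q as a set of ordered pairs.
    INFO = {('bot', q) for q in ['bot', 'f', 't', 'top']} | \
           {(q, q) for q in ['f', 't', 'top']} | {('f', 'top'), ('t', 'top')}

    def monotonic(p):
        return all((p[s], p[t]) in INFO
                   for s in COVERS for t in COVERS if below(s, t))

    print(monotonic(g), monotonic(h))    # True True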
Note that in the example, s₃ is the weakest state for which g(s) = t. In a sense, s₃ partially characterises g, and we use this idea as a building block for characterising predicates, motivating the next definition. Given a predicate p, we are interested in the pairs (s_q, q) where s_q is a weakest state for which p(s) = q.

[Figure 3.7: Definition of g and h — the values of g(s) (left) and h(s) (right) drawn on the state space of Figure 1.1.]

Definition 3.1.
(s_q, q) ∈ S × Q is a defining pair for a predicate g if g(s_q) = q and ∀s ∈ S, g(s) = q implies that s_q ⊑ s. □

In Example 3.1, (s₃, t) is a defining pair for g: if g(s) = t then s₃ ⊑ s. However, there is no defining pair (s_f, f) for g since there is no unique weakest element in S for which g takes on the value f. On the other hand, (s₁, f) is a defining pair for h.

Definition 3.2.
If g: S → Q then D(g) = {(s_q, q) ∈ S × Q : (s_q, q) is a defining pair for g} is the defining set of g. □

Using this definition it is easy to compute the defining sets of the functions g and h that were defined in Example 3.1:

D(g) = {(s₀, ⊥), (s₃, t), (s₉, ⊤)}
D(h) = {(s₀, ⊥), (s₁, f), (s₃, t), (s₉, ⊤)}

If a monotonic predicate has a defining pair for every element in its range, then its defining set uniquely characterises it (see Theorem 3.3 below). Such monotonic predicates are called simple predicates and form the basis of our temporal logic. The following notation is used in the next definition and elsewhere in the thesis: if g: A → B is a function then g(A) = {g(a) : a ∈ A} is the range of g.

Definition 3.3.
A monotone predicate g: S → Q is simple if ∀q ∈ g(S), ∃(s_q, q) ∈ D(g). □

In Example 3.1, h is simple since every element in the range of h has a defining pair. On the other hand, g is not simple since there is no defining pair (s_f, f). Informally, g is not simple since we cannot use a single element of S to characterise the values for which g(s) = f.

Definition 3.4.
Some of the important simple predicates are the constant predicates. For each q ∈ Q, the constant predicate C_q(s) = q has defining set D(C_q) = {(X, q)} and so is simple. □

Note that simple predicates need not be surjective; the only requirement is that if q is in the range of a simple predicate, there is a unique weakest element in S for which the predicate attains the value q. A trivial result used a number of times here is that the bottom element of S must be one of the defining values for every predicate: this has the consequence that every element in S is ordered (by being at least as large as) with respect to one of the defining values of each monotonic predicate.

Theorem 3.3.
If g, h: S → Q are simple, then D(g) = D(h) implies that ∀s ∈ S, g(s) = h(s).

Proof. See Section A.1. □

This result is used later to show the generality of the definitions.

Definition 3.5.
Let G be the set of simple predicates over S. □

We now use G to construct the temporal logic.

Definition 3.6 (The Scalar Extended Logic — TL).
The set of scalar TL formulas is defined by the following abstract syntax:

TL ::= G | TL ∧ TL | ¬TL | Next TL | TL Until TL □

The semantics of a formula is given by the satisfaction relation Sat (Sat: S^ω × TL → Q). Given a sequence σ and a TL formula g, Sat returns the degree to which σ satisfies g. Suppose g and h are TL formulas.
Informally, if g is simple, a sequence satisfies it if g holds of the initial state of the sequence. Conjunction has a natural definition. A sequence satisfies ¬g if it does not satisfy g. A sequence satisfies Next g if the sequence obtained by removing the first element of the sequence satisfies g. A sequence satisfies g Until h if there is a k such that the first k − 1 suffixes of the sequence satisfy g and the k-th suffix satisfies h. (In the special case of g and h being simple, this is equivalent to saying that g is true of the first k − 1 states in the sequence, and h is true of the k-th state.) Note that in the definitions below, ∧ and ¬ in bold face are operations on TL formulas, whereas ∧ and ¬ are operations on Q.

Comment on notation: Sequences are ubiquitous throughout this thesis. There is extensive need to refer to suffixes and individual elements of these sequences. Moreover, individual elements of sequences can be vectors, and on top of this, it is often useful to talk about different sequences. It is plausible to use subscripts to describe all these, but, unfortunately, there is also often a need to refer to these different concepts in close proximity to each other and so there is great opportunity for confusion. To avoid this confusion, a slightly more cumbersome notation is used than might otherwise be desirable. This notation is summarised below.

1. Lower case Greek letters, σ, τ, … are used to refer to sequences.
2. If σ = s₀s₁s₂…, then σᵢ denotes sᵢ.
3. If σ = s₀s₁s₂… is a sequence, σ≥i refers to the sequence sᵢsᵢ₊₁…, which is a suffix of σ.
4. Superscripts are used to refer to different sequences, e.g. σ¹, σ². Although this conflicts with the usual use of superscript in mathematical text, there is little chance of confusion since 'squaring' states is not defined.
5. If s is a state which is a vector of elements, then s[k] refers to the k-th component of s.

For example, σ³≥i refers to the suffix of the sequence σ³ obtained by removing the first i elements of σ³, and (σ³≥i)₀[k] = σ³ᵢ[k] is the k-th component of the i-th element in the sequence σ³.

Definition 3.7 (Semantics of TL).
Let σ = s₀s₁s₂… ∈ S^ω:

1. If g ∈ G then Sat(σ, g) = g(s₀).
2. Sat(σ, g ∧ h) = Sat(σ, g) ∧ Sat(σ, h).
3. Sat(σ, ¬g) = ¬Sat(σ, g).
4. Sat(σ, Next g) = Sat(σ≥1, g).
5. Sat(σ, g Until h) = ⋁_{i≥0} ( (⋀_{0≤j<i} Sat(σ≥j, g)) ∧ Sat(σ≥i, h) ). □

Note that this is the strong version of the until operator: g need never hold, and h must eventually hold.

The until operator is defined as an infinite disjunction of conjunctions. That this is well defined comes from Q being a complete lattice with respect to the truth ordering. Recall that ∧ is defined as the meet of the truth ordering, and ∨ is defined as the join. Moreover, in a complete lattice, all sets have a meet and join. Therefore each conjunction is well defined, and thus the disjunction of the conjunctions is too. An intuition to support that the definition is well behaved is that the sequence aᵏ = ⋁_{0≤i≤k} ( (⋀_{0≤j<i} Sat(σ≥j, g)) ∧ Sat(σ≥i, h) ) is an increasing sequence in Q. As Q is finite and bounded above, the sequence (aᵏ) has a limit.
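The clauses of Definition 3.7 translate directly into a recursive evaluator. The sketch below models simple predicates as functions from states to Q (dual-rail pairs as in Section 3.2) and sequences as functions from positions to states; the Until clause computes the approximation aᵏ just described up to a caller-supplied bound, so it is only an under-approximation of the infinite disjunction. All names and the tuple representation of formulas are inventions for this illustration.

    BOT, F, T, TOP = (0, 0), (0, 1), (1, 0), (1, 1)

    def q_and(a, b): return (a[0] & b[0], a[1] | b[1])
    def q_or(a, b):  return (a[0] | b[0], a[1] & b[1])
    def q_not(a):    return (a[1], a[0])

    def sat(sigma, g, bound=20):
        """sigma: function position -> state; g: formula as nested tuples."""
        tag = g[0]
        if tag == 'pred':                      # clause 1: apply the predicate to s0
            return g[1](sigma(0))
        if tag == 'and':                       # clause 2
            return q_and(sat(sigma, g[1], bound), sat(sigma, g[2], bound))
        if tag == 'not':                       # clause 3
            return q_not(sat(sigma, g[1], bound))
        if tag == 'next':                      # clause 4: drop the first state
            return sat(lambda i: sigma(i + 1), g[1], bound)
        if tag == 'until':                     # clause 5, truncated at `bound`
            result, prefix = F, T              # prefix = conjunction of Sat(sigma>=j, g1)
            for i in range(bound):
                shifted = lambda k, i=i: sigma(k + i)
                result = q_or(result, q_and(prefix, sat(shifted, g[2], bound)))
                prefix = q_and(prefix, sat(shifted, g[1], bound))
            return result
        raise ValueError(tag)

    # Example: states are themselves Q values and the only predicate is the identity.
    ident = ('pred', lambda s: s)
    sigma = lambda i: T if i >= 2 else F       # false twice, then true forever
    print(sat(sigma, ('until', ('not', ident), ident)) == T)   # True

The example evaluates (¬p) Until p on a sequence where p is false for two steps and then true, and the approximation stabilises at t after three terms of the disjunction.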
Using these operators we can define other operators as shorthand.

Definition 3.8 (Other operators). Some that we shall use are:

• Disjunction: g ∨ h = ¬((¬g) ∧ (¬h)).
• Implication: g ⇒ h = (¬g) ∨ h.
• Sometime: Exists g = t Until g. (Some suffix of the sequence satisfies g.)
• Always: Global g = ¬(Exists ¬g). (No suffix of the sequence does not satisfy g, hence all must satisfy g.)
• Weak until: g UntilW h = (g Until h) ∨ (Global g). (This doesn't demand that h ever be satisfied.) □

Using the operators defined above, other operators can be defined, including bounded versions of Global, Exists, UntilW and Until, and a periodic operator Periodic that can be used to test the state of the system periodically. Other operators — for example, periodic versions of the until operators etc. — are possible too. Two very useful derived operators are the generalised version of Next and the bounded always operator.

• The generalised Next operator is defined by:
  Next⁰ g = g
  Nextᵏ⁺¹ g = Next (Nextᵏ g)

• The bounded always operator, defined by
  Global [(a₀, b₀), …, (aₙ, bₙ)] g = ⋀_{j=0}^{n} ( ⋀_{k=aⱼ}^{bⱼ} Nextᵏ g ),
  asks whether g holds between aⱼ and bⱼ for j = 0, …, n.

If q = Sat(σ, g) then we say that σ satisfies g with truth value q, and if q ≼ Sat(σ, g), then we say that σ satisfies g with truth value at least q.

One of the key properties of the satisfaction relation is that it is monotonic.

Lemma 3.4.
The satisfaction relation is monotonic: for all σ¹, σ² ∈ S^ω, if q = Sat(σ¹, g) and σ¹ ⊑ σ², then q ≼ Sat(σ², g).

Proof. If g is simple, this follows since g is monotonic. Since the operators of Q (conjunction, disjunction and negation) are all monotonic with respect to their operands, the monotonicity of TL follows by structural induction. Again, for the until operator this relies on Q being a complete lattice. □

Although the basis of the logic is G, the set of simple predicates, Theorem 3.5 shows that all monotonic predicates can be expressed in TL. If g is a TL formula not containing any temporal operators, then its semantics with respect to a sequence is determined solely by the value of the first element of the sequence. This implies that we can consider such a g to be a predicate from S → Q. Formally, overloading the symbol g, we can define g: S → Q by g(s) = Sat(sXX…, g).

Theorem 3.5.
For all monotonic predicates p: S → Q, ∃p' ∈ TL such that ∀s ∈ S, p(s) = p'(s).

Proof. See Section A.1. □

Consider the functions defined in Example 3.1, and let

h'(s) = ⊥ when s ∈ {s₀, s₁, s₄, s₅};  f when s ∈ {s₂, s₆};  t when s ∈ {s₃, s₇, s₈};  ⊤ when s = s₉

D(h') = {(s₀, ⊥), (s₂, f), (s₃, t), (s₉, ⊤)} and so h' is simple. Note that g = h ∧ h'. So, although g is not simple, it can be expressed as the conjunction of two simple predicates.

The depth of a TL formula is a measure of how far in the future it describes behaviour of sequences; it shows how deeply nested next state operators are. Formally, if g is a TL formula, its depth, d(g), is defined by:

d(g) = 0 for g ∈ G        d(g₁ ∧ g₂) = max{d(g₁), d(g₂)}        d(¬g) = d(g)
d(Next g) = d(g) + 1      d(g₁ Until g₂) = ∞

3.3.2 Some Laws of TL

This section presents some of the algebraic laws of TL. These are used extensively in proofs and are often used in practical situations. First, the equivalence of two TL formulas is defined.

Definition 3.9.
If g, h ∈ TL, then g = h if ∀σ ∈ S^ω, Sat(σ, g) = Sat(σ, h). □

TL obeys most of the laws of a boolean algebra (C_f and C_t, two of the constant simple predicates, are identities under disjunction and conjunction respectively).
However, the inverse or  •  60  Chapter 3. The Temporal Logic TL  complementary laws do not hold (since the law of the excluded middle does not hold). Moreover, if we do consider TL as an algebra, it has a more complex structure than a boolean algebra. Lemma 3.6 (Some algebraic laws of TL). 1.  Commutativity: = 02 A  9\Ag  2  2.  0i V g  2  Associativity:  (0i V g ) V g 2  = gi V (g  3  2  3. De Morgan's 0i A g  3  2  hA{ Vg )  of A and  5. Distributivity  x  2  =  3  g A(g A x  g)  2  3  = ~ (~ gi A ->g ). ,  2  ,  2  V : 2  h\l{ Ag ) gi  =  2  (h V ) 9l  A(h V  g) 2  of Next :  Next (51 Aflf ) = 2  x  (/i A 0 i ) V (hAg ),  =  2  (g Ag )Ag  Law:  4. Distributivity 9l  V g ),  = ->{->gi V ->g ), g Vg  2  6.  = 02 V 0 i .  (Next$ri)A(Nextflf ), Next ( ^ V 52) = 2  (Next ^ ) V (Next $r ). 2  Identity:  g V C  f  =  7. Double -'-'g  =  0, 0 A C  t  = 0-  negation: 0  Proo/. See Section A. 1.3. 3.3.3  •  Symbolic Version  Describing the properties of a system explicitly by a set of scalar formulas of TL would be far too tedious. Symbolic formulas allow a concise representation of a large set of scalar formulas. A symbolic formula represents the set of all possible instantiations of that symbolic formula. TL is extended to symbolic domains by allowing boolean variables to appear in the formulas. Let V be a set of variable names { u i , . . .  , v }. n  It would be possible to define the symbolic  61  Chapter 3. The Temporal Logic TL  version of the logic by introducing quaternary variables. However, in practice, it is boolean variables which are needed, and introducing only boolean variables means that simpler and more efficient implementations of the logic can be accomplished. Furthermore, the effect of a quaternary variable can be created by introducing a pair of boolean variables. Definition 3.10 (The Extended Logic — TL). The syntax of the set of symbolic TL formulas, TL, is defined by:TL::=  G \ V \ TLATL  \ ^TL | Next TL | TL U n t i l TL  • The derived operators are defined in a similar way to Definition 3.8. For convenience, where there is little chance of confusion, the dots on TL formulas are omitted. The satisfaction relation is now determined by a sequence, a formula, tion  and  an  interpreta-  of the variables. An interpretation, cj>, is a mapping from variables to the set of constant  predicates {f, t}, Let $ = {</>: 0: V —> {f, t}} be the set of all interpretations. Given an interpretation $ of the variables, there is a natural, inductively defined interpretation of TL formulas. For a given c6 £  we extend the definition from V to all of TL by defining: <j>{g) = gifg € G </>(-•#) = -><t>(g) <t>{9i A 92) = <l>(gi) A <j>(g ) 2  c/>(Next g) = Next cf>(g)  <f)(gi U n t i l g ) = cf>(gi) U n t i l (j)(g ) 2  This can be expressed syntactically: if 4>(vi) as (j>(g) = g[bi/v ... ,b /v }. u  n  n  = hi,  2  replace each occurrence of Vi with  written  Chapter 3. The Temporal Logic TL  62  Given a sequence and a symbolic formula, the symbolic satisfaction relations, SAT , deterq  mine for which interpretations of variables the sequence satisfies the formula with which degree of truth. For example, we may be interested in the interpretations of variables for which a sequence satisfies a formula with truth value t, or the interpretations for which a sequence satisfies a formula with truth value at least t. By being able to determine for which interpretations a property holds with a given degree of truth, we are able to construct appropriate verification conditions. 
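To make the instantiation of symbolic formulas concrete, the small Python sketch below applies an interpretation to a formula by structural recursion, in the spirit of the substitution g[b1/v1, ..., bn/vn] described above. The tuple encoding of formulas and all function names are assumptions made for this illustration only; they are not part of the logic or of any verification tool.

    # Illustrative sketch only: symbolic TL formulas encoded as nested tuples,
    # and the instantiation phi(g) = g[b1/v1,...,bn/vn] of Definition 3.10.
    # The constructors and function names are assumptions of this example.
    def instantiate(g, phi):
        """Replace every variable v by the constant predicate phi[v] ('t' or 'f')."""
        tag = g[0]
        if tag == 'var':                       # v becomes phi(v)
            return ('const', phi[g[1]])
        if tag in ('const', 'simple'):         # constants and simple predicates are unchanged
            return g
        if tag in ('not', 'next'):
            return (tag, instantiate(g[1], phi))
        if tag in ('and', 'until'):
            return (tag, instantiate(g[1], phi), instantiate(g[2], phi))
        raise ValueError('unknown constructor: ' + tag)

    # One interpretation yields one scalar formula.
    g = ('and', ('var', 'v1'), ('next', ('not', ('var', 'v2'))))
    print(instantiate(g, {'v1': 't', 'v2': 'f'}))

Each interpretation of the variables thus yields a single scalar formula, which is then judged by the scalar satisfaction relation.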
The scalar satisfaction relation, Sat, is used in the definition of the symbolic relations. Definition 3.11 (Satisfaction relations for TL). A number of satisfaction relations are defined. • For q = f, t, T , SAT {a,g) = {<j> € $ : q = Sat{a, <f>{g))}. q  • For q = f, t, T , SAT^g)  = {cj> e $ : q =< Sat{a, <j>{g))}.  •  Note that if g is a (symbolic) formula and 4> an interpretation, then SAT (a,g) C $, while q  Sat(a,(f)(g)) (E Q. Informally, • SAT ^(a, g) is the set of interpretations for which g and ->g hold. Such results are undeT  sirable and verification algorithms should detect and flag them. SAT (a,g) = SAT (a,g). Tt  T  • SAT (cr, g) is the set of interpretations for which g is (sensibly) true. t  SAT^a,g) = SAT (a,g) U SAT {a, g). T  t  • SATf(a, g) is the set of mappings for which g is (sensibly) false. SAT {a,g) = SAT (a,g) U SAT {er,g). n  T  {  63  Chapter 3. The Temporal Logic TL  Thus each satisfaction relation defines a set of interpretations for which a desired relationship holds. Sets of interpretations can be represented efficiently using BDDs, as is discussed in Chapter 6. 3.4  Circuit Models as State Spaces  In practice, the model-checking algorithms described in this thesis are applied to circuit models. The state space for such a model represents the values which the nodes in the circuit take on, and the next state function can be represented implicitly by symbolic simulation of the circuit. The nodes in a circuit take on high (H) and low (L) voltage values. It is useful, both computationally and mathematically, to allow nodes to take on unknown (U) and inconsistent or over-defined (Z) values. The set C = {U, L, H, Z} forms the lattice defined in Figure 1.2 on page 10. The special case of the state space being a cross-product of quaternary sets need be treated no differently than the general case (when the state space is an arbitrary lattice) as all the above definitions apply. However, it is convenient to establish some additional notation. Let S = C  n  for some n. Typically in this case 71 = {U, L, H}" (node values can be unknown or have welldefined values, but cannot be in an inconsistent state). Let G be the smallest set with the following predicates:n  • The constant predicates: f, t,  T e G\ n  • Vi € {1,... ,n}, [i] e G. Here [i] refers to the i-th component of the state space. A formula g is evaluated with respect to a state by substituting for each [i] which appears in the formula the value of the z-th component of the state. Formally,  Chapter 3. The Temporal Logic TL  when s[i] when when s[i] when • f(*)  64  U L H  Z  f;  t; • -L (s) =±;  • T(s) = T; Note that all members of G are simple and hence monotonic. The definition below of the T L n  n  is based on that of TL, replacing G with G . The set of scalar T L formulas is defined by the n  n  following abstract syntax: TL„ ::= G | TL„ A TL„ | ->TL„ | Next TL„ | T L U n t i l T L n  n  n  The semantics of T L is patterned on Definition 3.7, replacing G with G ; this is reproduced n  n  below for completeness. Definition 3.12 (Semantics of TL„). The semantics of TL„ formulas is defined by the following: 1. If g G G then Sat(a,g) = g(s ); n  2. Sat(a, gAh)=  0  Sat(a, g) A Sat(a, h);  3. Sat(a,->g) = -<Sat(a,g); 4. Sat(cr, Nextg) = Sat(a>i,g); 5. SatfagVntilh)  = V ((*A Sat(a>j,g)) A Sat(a i, h))). >  i=o j=o  ~  •  Chapter 3. 
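As an illustration of these definitions, and of nothing more, the following Python fragment evaluates the node predicates [i] of TL_n on a single circuit state. The encoding of C = {U, L, H, Z} and of Q, the zero-based indexing, the choice of truth ordering with f lowest and t highest, and all function names are assumptions of this example.

    # One possible encoding (an assumption of this example) of the node
    # predicates [i] of TL_n over the circuit lattice C = {U, L, H, Z}.
    TO_Q = {'U': 'bot', 'L': 'f', 'H': 't', 'Z': 'top'}

    def node(i, s):
        """The simple predicate [i]: the i-th node value read as a member of Q (0-based here)."""
        return TO_Q[s[i]]

    def q_and(a, b):
        """Conjunction on Q, taken here as the meet of a truth ordering with f lowest and t highest."""
        if a == 'f' or b == 'f':
            return 'f'
        if a == 't':
            return b
        if b == 't':
            return a
        return a if a == b else 'f'

    def q_not(a):
        """Negation on Q: swaps f and t, leaves bot and top fixed."""
        return {'f': 't', 't': 'f'}.get(a, a)

    s = ('H', 'L', 'U')                          # one state of C^3
    print(q_not(q_and(node(0, s), node(1, s))))  # evaluates to 't'

Temporal formulas are, of course, evaluated over whole sequences of such states rather than over a single state.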
The Temporal Logic TL  65  These definitions are useful because in practice properties of interest are built up from the set of predicates that say things about individual state components. Lemma 3.7 shows that restricting the basis of T L to G is not a real restriction as any simple predicate can be constructed using n  n  the operators such as conjunction. Lemma 3.7 (Power of G). If p is a simple predicate over C , then there is a predicate g € T L such that p = g . n  p  Proof.  n  p  See Section A. 1.  •  The combined impact of Theorem 3.5 and Lemma 3.7 is that the logic TL„ is powerful enough to describe all interesting (monotonic) state predicates over Q. The definition of the symbolic version of TL„ is exactly the same as the general definitions (Definitions 3.10 and 3.11), substituting G for G. n  The set of TL  n  fragment  formulas in which T does not syntactically appear is known as the  of T L . If g is a formula in the realisable fragment of T L , then Sat(a, n  n  realisable  g) = T  only if  there exists i,j such that <r,-[j] = Z. Thus, if g is a formula with this restriction, and Z does not appear in a then SAT^a,  g) = SAT (a, t  g).  This result is important since we are most interested  in the SAT relation. As shown in the next chapter, there is a good decision procedure for the t  relation SAT^: we check whether SAT^(a,g)  = SAT (cr, g), t  and thereby extend the decision  procedure to formulas in the realisable fragment of TL„ to determine the SAT relation too. t  Other Application Areas Although Q is proposed here as the basis of a temporal logic, it may have other applications in computer science. In a widely quoted and influential logic text, the White Knight says (the quote is taken from an extract dealing with names and reference):  Chapter 3.  The Temporal  Logic  66  TL  'It's long,' said the Knight, 'but it's very, very beautiful. Everybody that hears me sing it — either it brings tears into their eyes, or else —' 'Or else it doesn't you know . . . ' — [31] In his commentary on this, Heath says [79]: 'An essentially vacuous claim, since it merely sets forth the logical truism, p or not-p, embodied in the "law of the excluded middle.'" In the light of the preceding discussion, the White Knight's claim, and particularly Heath's critique can be seen to be problematic. While it will be the case that hearing the White Knight sing the song makes everyone cry or not cry, as computer scientists, we are interested in making predictions about the behaviour of a system under study. Thus the analyses of the White Knight and Heath are somewhat simplistic, and do not take into account lack of information or inconsistent information which often occur when reasoning about the world. A far more serious instance of the same error can be found in [51] where in  The Beryl  Coro-  net, Sherlock Holmes says: 'It is an old maxim of mine that when you have excluded the impossible, whatever remains, however improbable, must be the truth.' In this context, Holmes is using logic to reason about a system that inherently has partial and inconsistent information. Our knowledge about such a system must reflect this: the characterisation of propositions about the world into 'impossible' and 'truth' is, as argued earlier, an inadequate logical framework for reasoning. Given the influence of this work on an important branch of logic and deduction, it is important to show the limits of a two-valued logic. 
And, the notion that simple characterisations may not be appropriate was recognised in work contemporaneous with [51], in an approach that is to be preferred: 'Truth is rarely pure, and never simple' [126].  Chapter 3. The Temporal Logic TL  3.5  67  Alternative Definition of Semantics  Although in this chapter the semantics of TL formulas was given through the definition of the satisfaction relations, there are alternative ways in which the semantics could be given. A 3  method that is useful to consider here because its underlying motivation leads to an effective verification method defines the semantics by giving for each temporal logic formula the set of sequences that satisfy it. For TL the same pattern could be used, adjusting for the fact that TL is quaternary. Definition 3.13 suggests how this could be done for TL (based on [120, p. 523]). Definition 3.13 (Alternative definition of semantics). ||flf|| = {cr : t = g(so)} if g £ G. t  \\g\ Ag \\t = \\gi\\t n ||$r ||t2  2  Ihffllt = IM|f. 0iUntilflr ||t = U {<? £ S' 2  2=  £ 11fifi11 and a  : Vj : 0 < j < i, a  0  yj  1  >{  £ ||flf||}. 2  t  —  The definition of ||sr|| for values of Q other than t is similar.  •  9  If this definition were used to give the semantics, then to ask whether a satisfies g with degree q is to ask whether a £ \\g\\ . Similar definitions could be given for satisfaction 'with degree at q  least q\ Practically speaking, this definition is not useful since these sets are so large that even if onlyfinitesubsequences were considered (which is often reasonable to do) the sets would be too large to compute and represent explicitly. However, the partial order representation of the state space is extremely useful. Take as an example simple predicates. If g is a simple predicate, in general, the set of sequences for which Sat(a, g) = t will be too large to compute. However, we have seen that the defining set of g, D(g), essentially captures this information: if, for example, (s , t) and (sj, T) are defining t  pairs, and if a is an arbitrary sequence, then a £ ||g|| if s ^ cr -< s . t  3  t  0  T  The semantics are the same; it is the way that the semantics is given that differs.  Chapter 3. The Temporal Logic TL  68  In the same way as the defining sets of a simple predicates characterise the simple predicates, there are analogous structures for other TL formulas that characterise them. And in the same way the defining sets can be considered to give semantics to simple predicates when viewed as TL formulas, these analogous structures give semantics to more complicated TL formulas, and can therefore be used to test satisfaction of sequences. This is the subject of the next chapter.  Chapter 4 Symbolic Trajectory Evaluation  This chapter develops a model checking algorithm for TL. It is based on the idea raised in the last part of Chapter 3 that formulas of TL can be characterised by the set of sequences or trajectories which satisfy them. Initially, only the scalar version of TL is examined. Extension to the symbolic case is straightforward; however, there is enough extra notation and detail to make an exposition of the scalar case clearer, which overcomes the disadvantage of a little repetition to present the symbolic case. Let the model structure of the system be M = ((<S, C ), TZ, Y). S" is the set of sequences of the state space. The partial order on S is extended point-wise to sequences. 
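For circuit models the point-wise extension is easy to state concretely. The sketch below is illustrative only: the encoding of C and the function names are assumed for the example. It compares two finite sequence prefixes of states under the information ordering of Figure 1.2, in which U lies below L and H, and Z lies above both.

    # Illustrative only: the information ordering on C = {U, L, H, Z} (U below
    # L and H, Z above both) extended point-wise to finite sequence prefixes.
    def leq_c(a, b):
        """a is below or equal to b in the information ordering on C."""
        return a == b or a == 'U' or b == 'Z'

    def leq_seq(s1, s2):
        """Point-wise extension of the ordering to equal-length sequences of states."""
        return all(leq_c(a, b)
                   for st1, st2 in zip(s1, s2)
                   for a, b in zip(st1, st2))

    sigma1 = [('U', 'H'), ('L', 'U')]
    sigma2 = [('L', 'H'), ('L', 'H')]
    print(leq_seq(sigma1, sigma2))   # True: sigma1 carries less information than sigma2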
Informally, the trajectories are all the possible runs of the system; formally, a trajectory, a, is a sequence compatible with the next state function: Vi>0,Y(<7,-)E^+i.  Let ST be the set of trajectories and, TZT = TZ fl ST is the set of realisable trajectories. W  TZT  represents those trajectories corresponding to real behaviours of a system.  {a ai... cr -i : a £ TZT] 0  m  l S  TZT(m)  =  the set of prefixes of TZj of length m.  Section 4.1 explores the style of verification adopted; this introduces some useful notation and definitions and guides the rest of this discussion. Section 4.2 shows that a formula of TL can be characterised by the sets of minimal trajectories that satisfy it, and furthermore shows that these sets can be used to accomplish verification. The computation of such sets is not directly  69  Chapter 4.  Symbolic  Trajectory  70  Evaluation  possible, but Section 4.3 shows that computing approximations of the sets is feasible (and as later experimental evidence will show, forms a good basis for practical verification). Finally, Section 4.4 generalises the work to the full symbolic logic. 4.1 Verification with Symbolic Trajectory Evaluation The style of verification used in symbolic trajectory evaluation (STE) is to ask questions of the form: Do all trajectories that satisfy g also satisfy hi The formula g is known as the  antecedent,  and the formula h is known as the  consequent.  'Satisfy' is a broad term — there are a number of satisfaction relations that can be used. Which one matches our notion of correctness? There are a number of possible ways of modelling correctness, and the key issue is how to deal with inconsistent information. How correctness is modelled depends on choices the verifier makes — although guided by technical considerations, the verifier has considerable flexibility. There are two obvious ways to formalise the notion of 'trajectory a (successfully) satisfies g'.  t =  Sat(a,g)  t ^ Sat(a,  g)  (4.1) (4.2)  Relation (4.1) captures a more precise notion — successful satisfaction describes a situation where inconsistency has not caused a predicate to be true of a trajectory. Intuitively, it is a better model of satisfaction than Relation (4.2). However, the latter definition has some advantages: it does capture some useful information; most importantly, as shown later, there is an efficient model checking algorithm using Relation (4.2); and it is often practical to infer the former relation from the latter one. For this reason, we concentrate, for the moment, on the second choice.  Chapter 4. Symbolic Trajectory Evaluation  71  Corresponding to these two definitions, there are two ways of asserting correctness with respect to a formula. Definition 4.1. g=$>h if and only if Vcr € TZT, t = Sat(cr, g) implies that t = Sat(a, h). and Definition 4.2. g=^>h iff V<T G ST, t ^ Sat(a, <f>(g)) implies that t ^ Sat(a, <j>(h)). Thefirstdefinition takes a very precise view of realisability. First, we only consider realisable trajectories — if there are unrealisable trajectories with strange behaviour, then these are ignored. Moreover, by this definition a sequence satisfying a formula with degree of truth greater than t (i.e. with degree T ) is undesirable. In practice, the model checking algorithm will check in addition that there are some realisable trajectories which satisfy the antecedent (i.e. that the verification assertion is not satisfied vacuously). I submit that this definition best captures the notion of correctness. 
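Both verification conditions can be phrased operationally. The fragment below is a sketch only: it assumes a finite collection of trajectory prefixes and a user-supplied evaluator sat(sigma, g) returning a value of Q, whereas the thesis works with infinite sequences and the trajectory-evaluation machinery developed in this chapter; all names are illustrative.

    # A sketch of the two assertion forms, for a finite collection of trajectory
    # prefixes and a user-supplied evaluator sat(sigma, g) returning a value of Q.
    # The finite enumeration and all names are assumptions of this example.
    def at_least_t(q):
        """t is below or equal to q in the information ordering (q is t or top)."""
        return q in ('t', 'top')

    def holds_def_4_1(realisable_trajs, sat, g, h):
        """Definition 4.1: over realisable trajectories, Sat = t for g forces Sat = t for h."""
        return all(sat(s, h) == 't' for s in realisable_trajs if sat(s, g) == 't')

    def holds_def_4_2(all_trajs, sat, g, h):
        """Definition 4.2: over all trajectories, t <= Sat(.,g) forces t <= Sat(.,h)."""
        return all(at_least_t(sat(s, h)) for s in all_trajs if at_least_t(sat(s, g)))

    # Toy use with a trivial evaluator; in practice the check is never done by enumeration.
    trajs = ['s0', 's1']
    sat = lambda s, g: 't'
    print(holds_def_4_1(trajs, sat, 'g', 'h'), holds_def_4_2(trajs, sat, 'g', 'h'))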
The second definition takes a more relaxed view of inconsistent behaviour. We consider the behaviour of all trajectories, whether realisable or not, and treat the truth values t and T as satisfying the notion of correctness. Although, this definition is not as good a model of correctness as thefirst,it has the advantage that there is an efficient verification method for it. Therefore, for pragmatic reasons, we concentrate at first on Definition 4.2, which will be central in the development of the next three sections. These sections show how an efficient verification methodology for correctness assertions based on this definition can be developed. The last part of Section 4.4 shows that for circuit models, this methodology can be used to infer correctness assertions based on Definition 4.1.  72  Chapter 4. Symbolic Trajectory Evaluation  4.2 Minimal Sequences and Verification This section first formalises the notion of the sets of minimal trajectories satisfying formulas, and then shows how these sets can be used in verification. The first definition is an auxiliary one: given a subset of a partially-ordered set, it is useful to be able to determine the minimal elements of the set. If B is a subset of A, then b £ B is a minimal element of B if no other element in B is smaller than b (i.e. all elements of A smaller than b do not lie in B). Definition 4.3. If A is a set, B C A, and C a partial order on A, then mini? = {b £ B : if 3a £ A 9 a C b, either a = 6 or a £ } .  • Definition 4.4. If 5 is a (scalar) TL formula, then min g is the set of minimal trajectories satisfying g, where ming is defined by: ming = min{cr £ ST • t ^ Sat(cr, g)}  •  Note that if min g C min /i, then every trajectory that satisfies g also satisfies /i. For suppose a satisfies g: then there must exist a' £ ming such that t •< Sat(a',g)  and a' • a; but since  ming C minh, a' £ min/i and hence t •< Sat(a', h); hence by monotonicity t -< Sat(a, h). This gives some indication that manipulating and comparing the sets of minimal trajectories that satisfy formulas can be useful in verification. Although we will be comparing sets of sequences, containment is too restrictive, motivating a more general method of set comparison. The statement 'every trajectory that satisfies g also satisfies k" implies that the requirements for g to hold are stricter than the requirements for h to hold. Thus, if a is a minimal trajectory satisfying g, a must satisfy h. Since the requirements  73  Chapter 4. Symbolic Trajectory Evaluation  for g are stricter than the requirements on h, a need not be a minimal trajectory satisfying h, but there must be a minimal trajectory, a', satisfying h where a' C a. This is the intuition behind the following definition, which defines a relation over V(S), the power set of S. Definition 4.5. If S is a lattice with partial order C and A, B C <S, then A C ? B i f V & G B , 3 a G A such that a C 6.  •  To illustrate this definition, consider the example of Figure 4.1. Assume A and B are subsets of some partially ordered set, S. Note that in this example that both A and B are upward closed. Although the definitions given here do not require this, we will be dealing with upward closed sets. Figure 4.1(a) depicts A. Let A 1  m  = mm A = {a, 8,7, (} be the set of minimal elements  of A. Then A consists of all the elements above the dotted line. Similarly, Figure 4.1(b), depicts B. Let B  m  = min B = {n, 7} be the set of minimal elements of B. Figure 4.1(c) is the  superposition of Figures 4.1(a) and (b). 
Note that A C p B . For each element of B there is an element of A less than or equal m  m  m  m  to it: a C 77 and 7 C 7 . Suppose A is the set of elements with property g, and that B is the set of elements with property h. Then m'mg = A  m  and m'mh = B . By examining the figure it is easy to see m  that all elements of S that have property h also have property g (h implies g). But, note that m'mh % m i n H o w e v e r , it is the case that ming  min/i, which motivates exploring the  C p relation further. We will be manipulating sets of trajectories and sequences that satisfy formulas; that these are upward closed follows from the monotonicity of the satisfaction relation (Lemma 3.4). J  74  Chapter 4. Symbolic Trajectory Evaluation  A  /  B  /5  7  £  7  7  a  C  a  (c) A and 5  (b)S  Figure 4.1: The Preorder Lemma 4.1. If 5 is a lattice with partial order • , then  is a preorder (i.e., it is reflexive and transitive).  Proof. Reflexivity follows directly from the reflexivity of C . Suppose that A\Z B and B \Zj> C, and let c 6 C. V  (1)  36e#9&Cc  (2)  3a  e  A3  aH b  £ Q p C : Vce C, 3b £ B 9 & C c A\Z  V  B: Mb G B,  3a  (3) a C c  C is transitive  (4) AQ-p C  Since c was arbitrary.  € A 9 a C6  •  Note that if 5 C A, then AQ? B. The following theorem shows the importance of the definition of C-p . Theorem 4.2. If g and /t are TL formulas, then g=g>/j. if and only if min h C p ming.  Chapter 4. Symbolic  Trajectory  75  Evaluation  Proof.  By the definition of minimal sets, if t ^  Sat(a, g),  <^=> Vo- £ ST,  g=^>h  there exists a'  £ ming  t ^ Sar(cr, g) implies that t  Vcr' £ m i n g •<=>• V a ' £ m i n g  implies that t implies 3cr"  -< Sat(a',  £ min/i,  with a'  -< Sat(a,  C cr.  h)  h)  with a"  C cr  •<=>• m i n / i C p m i n g  • Although computing the minimal sets directly is often not practical, it is possible to find approximations of the minimal sets (they are approximations because they may contain some redundant sequences). The next section shows how to construct two types of approximations to the minimal sets. and ^(g) A*(/i)  A (/z) fc  is an approximation of the set of minimal  sequences  that satisfy  h,  is an approximation of m i n g . The importance of these approximations are that (i)  C p r (^r) t  exactly when g^^>h  (an analogue of Theorem 4.2), and (ii) there is an effi-  cient method for computing these approximations, which we now turn to. 4.3 Scalar Trajectory Evaluation The method of computing the approximations to the minimal sets of formulas is based on symbolic trajectory evaluation (STE), a model checking algorithm for checking partially-ordered state spaces. The original version of STE was first presented in [25] and a full description of STE can be found in [116]. In these presentations, the algorithm is applied only to trajectory formulas, a restricted, two-valued temporal logic. This chapter generalises earlier work in two important respects. 1. It presents the theory for applying STE to the quaternary logic. 2. It presents the theory for the full class of TL. In particular it deals with disjunction and negation.  Chapter 4. Symbolic Trajectory Evaluation  76  This section examines the scalar version of TL and shows how given a TL formula, a set of sequences characterising the formula can be constructed. Recall the definition of defining pair and defining set from Section 3.3.1. The defining set of a simple predicate characterises that predicate; this can be used as a building block to find a characterisation of all temporal predicates. 
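Before turning to how these minimal sets are approximated, Definitions 4.3 and 4.5 can be made concrete for finite sets. The Python fragment below is purely illustrative; the encoding of the ordered elements and the function names are assumptions of the example, and no claim is made that the verification system computes the sets this way.

    # Illustrative implementations of Definitions 4.3 and 4.5 for finite sets,
    # under an assumed encoding of the ordered elements; names are invented here.
    def min_set(B, leq):
        """Minimal elements of B: no other element of B lies strictly below them."""
        return {b for b in B if not any(a != b and leq(a, b) for a in B)}

    def below_p(A, B, leq):
        """A is below B in the preorder of Definition 4.5: every b in B dominates some a in A."""
        return all(any(leq(a, b) for a in A) for b in B)

    # Tiny example over subsets of {0, 1, 2} ordered by inclusion.
    leq = lambda a, b: a <= b
    A = [frozenset({0}), frozenset({1})]
    B = [frozenset({0, 2}), frozenset({0, 1})]
    print(min_set(set(A + B), leq))   # the two singletons are the minimal elements
    print(below_p(A, B, leq))         # True

Theorem 4.2 reduces the question of whether every trajectory satisfying g also satisfies h to exactly this kind of comparison, with sequences as the elements and the point-wise information ordering playing the role of leq.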
By using the partial order representation, an approximation of the minimal sequences that satisfy a formula can be used to characterise a formula. These sets are called defining sequence sets. Practical experience with verification using STE has shown that there are many formulas that have small defining sequence sets. This section shows how to construct defining sequence sets using the defining pairs of simple predicates as the starting point. The defining sequence sets of a formula are a pair of sets where the first set of the pair contains those sequences, a, for which t ^ Sat(a, g), and the second set contains those sequences for which f •< Sat(a, g). These sets are constructed using the syntactic structure of TL formulas. If a formula is simple its defining sequence sets are constructed directly from the defining set of the formula. For compound formulas, these sets are constructed by performing set manipulation described below. As manipulating sets of sequences is very important, first we build up some notation for manipulating and referring to such sets. Definition 4.6 (Notation). If A and B are subsets of a lattice C on which a partial order C is defined, then A II B = {a U b : a £ A,b G B}. If g: C —> C, g(A) continues to represent the range of g, and similarly, g((A, B)) = (g(A),g(B)}.  •  Note that we write AII B rather than A U B since although AII B is a least upper bound (with respect to C p ) of A and B it is not the least upper-bound (this reflects the fact that C p is a preorder not a partial order).  Chapter 4. Symbolic Trajectory Evaluation  77  The two fundamental operations used are join and union, and it is worth discussing how they are used. First, if we know how to characterise sequences that satisfy gi and those that satisfy g , how do we characterise sequences that satisfy gi A g l Let q £ Q and suppose that 2  2  cr and cr are the weakest sequences such that q -< Sat(a ,gi). Let a 1  2  l  = a U a . Clearly,  3  1  2  q •< Sat(a ,gi A g ). Moreover, supposeg -< Sat(a',gi A g ), then it must be that q •< Sat(a',gi) J  2  2  and q •< Sat(a', g ). Thus a Qa' and a • a' since the a are the weakest sequences such that 1  2  1  2  q -< Sat(a\gi). But, since a = cr U cr , a C cr'. Thus cr is the weakest sequence satisfying 3  1  2  3  J  01 Ac/2.  What about characterising sequences that satisfy giVg l 2  At first it may seem that this is  analogous, and we should just use meet instead of join. However, this is not symmetric: since we are characterising a predicate by the weakest sequences that satisfy it, taking the meets will lose information. While it will be the case that if q -< Sat(cr',gi V g ) then a n cr X cr', the 1  2  2  converse does not hold in general. This means that to characterise gi V g we need to use both 2  cr and cr . 1  2  Since the law of the excluded middle does not hold in the quaternary logic, we need to characterise both the sequences that satisfy a predicate with value at least t and those that satisfy a predicate with value at least f . Definition 4.7 (Defining sequence set). Let# £ TL. Define the defining sequence sets of g as A(g) = (A (g), A (g)), where the A (g) t  f  are defined recursively by: l.If g is simple, A (g) - {sXX... : (s, q) 6 D , or (s, T) £ D }. This says that provided q  g  g  a sequence has as its first element a value at least as big as s then it will satisfy g with truth value at least q. Note that A'(^) could be empty. 2.AG/1 V g2) = (A'Ofc) U A}(g ), A (9l) 11 A (g2)} f  2  f  q  Chapter 4. 
Symbolic Trajectory Evaluation  78  Informally, if a sequence satisfies gV h with a truth value at least t then it must satisfy either g or h with truth value at least t. Similarly if it satisfies g V h with a truth value at least f then it must satisfy both g and h with a truth value at least f. 3.A(<7! Ag ) = (A'CsO II A (g ),A (g ) t  f  2  2  U  1  A (</ )) f  2  This case is symmetric to the preceding one. 4 . A ( ^ ) = ( A ( r ) , A (9)) f  t  5  This is motivated by the fact that for q = f, t, a satisfies g with truth value at least q if and only if it satisfies ->g with truth value at least ->q. 5. A(Next g)  = Xs s\  ...)  0  0  ...  satisfies Next g with truth value at least q if and only if sis . . . satisfies g with  s sis ... 0  A(g), where shift(s Si  shift  —  2  2  at least value q. 6. A(  Untilg ) = (A ( k  gi  2  gi  Until# ), A ^ Until£r )), where 2  2  •A*(>i Untilg ) = U g ( A ( N e x t V ) I I . . . II A ^ N e x t ^ ) ^ ) II A^Next '^)) t  2  1  8  0  • A ( c / i U n t i l e ) = n ~ ( A ( N e x t ° ^ ) U . . . U A ( N e x t ^ - ) ^ ) U A (Next^ )) f  f  f  0  1  1  f  2  Recall that Next g = g if k = 0 and Next g = Next Next^ ^ otherwise. Here we k  k  -1  consider the until operator as a series of disjunctions and conjunctions and apply the motivation above when constructing the defining sequence sets. Note that it may be that 8 , S € A (g) where 8 C S . As a practical matter it would be prefer1  2  q  1  2  able for only 5 to be a member of A (g). However, this redundancy does not affect what is 1  presented below.  q  79  Chapter 4. Symbolic Trajectory Evaluation  An important consequence of Definition 4.7 is that for each formula g of TL, A(g) characterises g: all sequences that satisfy g must be greater than one of the sequences in A (g). The t  lemma below formalises this (the proof is in Section A.2). Lemma 4.3. Let g e TL, and let a G <S . For q = t, f, q X Sat{a, g) iff 3S £ A % ) with S C a. W  9  9  4.3.1 Examples The constant predicates have very simple denning sequence sets.  A(t) = ({XX..},0) A(f) =  A(1) = (00 ,)  (0, {XX... })  A ( T ) = ({XX... }, {XX... }}  Every sequence satisfies the predicate t with truth value t, and no sequence satisfies the predicate t with truth value f or T . Similarly, no sequence satisfies f with truth value at least t, while all sequences satisfy f with truth value f. Note that A(t) = A(-if) (indeed, it would be disconcerting if this were not the case). Example 4.1. Suppose that A(g) = ( A , B ) and A(h) = ( A , B ) . Then, g  A(g  V h) = A ( i ( - > # A A(-i(->g A  g  -i/i)).  h  h  To facilitate the proof, let rev{A,  ~~~>h))  —  -  f  f  rev(A (g)UA (h),A (g)UA (h))  =  (A (g)UA (h),A (g)UA (h))  t  t  A(gVh)  A).  II A\->h), A (-*g) U A(->h))  =  =  {B,  ->h)  rev A{->g A  = rev(A\-^g)  B)  r  t  t  f  t  f  •  Chapter 4. Symbolic Trajectory Evaluation  80  Example 4.2. x = y is short for (a; A y) V ((-"a;) A(->y)). A(x = y)  = (A*(xAy V (-xA-y)), A (xAy V f  (-.IA-"!/)))  = ( A ( x A y ) U A*((-.xA-.y)), k  A ( x A y ) II A (-.xA--y)) f  f  = ((A'(x) H A (y)) U (A'(-.x) II A ' ^ y ) ) , k  (A (x) U A (y)) H (A (-.x) U A (-.y))> f  =  f  f  f  ((A (x)nA (y))U(A (x)nA (y)), t  t  f  f  (A (x)UA (y))n(A (x)UA (y))) f  f  t  t  • Example 4.3. Let <S* = C ; this models a circuit with three state holding components. The formula g = 3  (([1] V [2]) = f) asks whether it is true that neither component 1 nor component 2 has the H value.  A(f) = <0,{(U,U,U)...}> A([1]) = ( { ( H , U , U ) ( U , U , U ) . . . 
} , { ( L , U , U ) ( U , U , U ) . . . » A([2]) = ( { ( U , H , U ) ( U , U , U ) . . . } , { ( U , L , U ) ( U , U , U ) . . . » A([1]V[2])= ({(H,U,U)(U,U,U)... ,(U,H,U)(U,U,U)...}, {(L,L,U)(U,U,U)...}} A ([l] V [2]) = f) = A ([l] V [2]) II 0 U A ([l] V [2]) II {(U, U, U)... } 4  k  f  = A ([l] V [2]) f  = {(L,L,U)(U,U,U)...}  Chapter 4. Symbolic Trajectory Evaluation  81  A ([1] V [2]) = f) = (A ([1] V [2]) U {(U, U, U ) . . . }) II (A'([1]V.[2].)) (A'([l] V [2]) U 0) ([l]V[2])U{(U,U,U)...}) f  ff  =  ({(L,L,U)(U,U,U)...}U{(U,U,U)...})  II {(H,U,U)(U,U,U)... ,(U,H,U)(U,U,U)...} =  {(L,L,U)(U,U,U)... n  =  ,(U,U,U)...}  {(H,U,U)(U,U,U)...  {(Z,L,U)(U,U,U)...  ,(U,H,U)(U,U,U)...}  ,(L,Z,U)(U,U,U)... ,  (H,U,U)(U,U,U)... ,(U,H,U)(U,U,U)...}  Note that A'([l] V [2]) \Z  (A ([1] V [2]) U {(U, U, U ) . . . }) II (A*([l] V [2])) showing the f  V  •  redundancy in defining sequence sets. 4.3.2  Defining Trajectory Sets  The defining sequence sets contain the set of the minimal sequences that satisfy the formula. It is possible to find the analogous structures for trajectories — we can find an approximation of the set of minimal trajectories that satisfy a formula. This section first shows how, given an arbitrary sequence, to find the weakest trajectory larger than it. Using this, the defining trajectory sets of a formula are defined. Finally, Theorem 4.5 is presented, which provides the basis for using defining sequence sets and defining trajectory sets to accomplish verification based on Definition 4.2. Definition 4.8. Let G = s sis 0  2  Let r(<r) = t tit ... 0  2  where: when i = 0  Y(ij_i) U Si  otherwise  82  Chapter 4. Symbolic Trajectory Evaluation  t tit ... 0  2  is the smallest trajectory larger than o. s is a possible starting point of a trajectory, 0  so t = so- Any run of the machine that starts in s must be in a state at least as large as Y ( s ) 0  0  0  after one time unit. So t must be the smallest state larger than both s and Y ( s ) . By definition x  x  0  of join, ti = Y ( s ) U Si = Y ( t ) U si. This can be generalised to U = Y(U-i) U s,-. 0  0  • In the same way that there is a set of minimal sequences that satisfy a formula, there is a set of minimal trajectories that satisfy a formula. A set that contains this set of minimal trajectories can be computed from the defining sequence sets. The defining trajectory sets are computed by finding for each sequence in the defining sequence sets the smallest trajectory bigger than the sequence.  Definition 4.9 (Defining trajectory set). T(g) = (T^g), T (g)), where T«(g) = {r(a) : a £ A % ) } .  •  f  Note that by construction, if T £ T (g) then there is a 5 £ A (g) with S C T . T(g) char9  q  9  q  g  9  acterises g by characterising the trajectories that satisfy g. This is formalised in the following lemma which is proved in Section A.2.  Lemma 4.4. Let g £ T L , and let a be a trajectory. For q = t,f,q^ with r Qa. 9  Sat(a,g) if and only if 3T £ T (g) 9  q  •  The existence of defining sequence sets and defining trajectory sets provides a potentially efficient method for verification of assertions such as g=^>h. The formula g, the antecedent, can be used to describe initial conditions or 'input' to the system. The consequent, h, describes the 'output'. This method is particularly efficient when the cardinalities of the defining sets are small. This verification approach is formalised in Theorem 4.5 (which is proved in Section A.2). Section 4.1 showed how this result is used in practice. 
Recall that these antecedent, consequent  Chapter 4. Symbolic Trajectory Evaluation  83  pairs are called assertions. Theorem 4.5.  If g and h are TL formulas, then A}(h) Q-p T*(<7) if and only if g=^>h.  •  Some formulas have small defining sequence sets with simple structure. Definition 4.10.  If g £ TL, and 38 G A\g) such that VcT £ A*(flf), S C S, then 8 is known as the defining 9  9  9  sequence of g. If the 5 is the defining sequence of g, then T = T(5 ) is known as the defining 9  9  9  •  trajectory of g.  Finite formulas with defining sequences are known as trajectory formulas. Seger and Bryant characterised these syntactically (see Section 2.4). Two useful special cases of Theorem 4.5 should be noted. First, if A is a formula of TL with a well-defined defining sequence 5 , and h G TL, then A  G A (/i), 5 C r if and only if, for k  A  every trajectory a for which t •< Sat(a, A) it is the case that t X Sat(cr, h). Second, let A and C be formulas of TL with well-defined defining sequences 5 and 5 . A  C  Then 8 C r if and only if, for every trajectory a for which q •< Sat(cr, A) it is the case that C  A  q X Sat(a, C). This is essentially the result of Seger and Bryant generalised to the four valued logic. 4.4  Symbolic Trajectory Evaluation  The results of Section 4.3 can easily be generalised to the symbolic version of TL. The constructs used in the previous section such as defining set and so on all have symbolic extensions. Each symbolic TL formula is a concise encoding of a number of scalar formulas; each interpretation of the variables yields a (possibly) different scalar formula. To extend the theory of  84  Chapter 4. Symbolic Trajectory Evaluation  trajectory evaluation, symbolic sets are introduced; these can be considered as concise encodings of a number of scalar sets. Symbolic sets can be manipulated in an analogous way to scalar sets. Using this approach, the key results presented above extend to the symbolic case. This section first presents some preliminary mathematical definitions and generalisations and then presents symbolic trajectory evaluation. The verification conditions are extended to the symbolic case. Given two symbolic formulas g and h we are interested in for which interpretations, (f>, it is the case that for all trajectories, a, if u satisfies g, then a also satisfies h. Again, both the =g> and ==t> relations are considered. 4.4.1  Preliminaries  Definition 4.11.  \g=Z>h)=  {(f> e  Vcr <E 1Z ,t = Sat(a,(f>(g)) implies that t = Sat(a,(f)(h))}. T  •  Ideally such verification assertions should hold for all interpretations of variables. Definition 4.12.  (= <] g=t>h > [ denotes (\g=s>h [>= $.  •  Note that |= <] g=S>h \ if and only if Vcr £ 7l ,SAT (a, g) C SAT (a,h). An alternative T  t  t  approach is to treat inconsistency more robustly (which is what happens in STE defined on a two-valued logic). We could use these definitions. Definition 4.13.  <] g=^>h [> = {(f) € $ : Vcr e S , t ^ Sat(a, <f>(g)) implies that t ^ Sat(a, (f>(h))} r  and  •  Chapter 4. Symbolic Trajectory Evaluation  85  Definition 4.14.  |= <] g=$>h > [ denotes <] g==$>h $.  •  Note that |= <] g=i>h ) if and only if Va £ <S, SAT^a, g) C SAT^ (a, /*)). r  Symbolic sets are now introduced. Definition 4.15.  Define £, the boolean subset of TL, by £ ::=t | f | V | £ A £ | - 5 This definition is used in this chapter and Chapter 5.  • Recall that an interpretation of variables can be considered as a function mapping a symbolic TL formula to a scalar one. 
In particular, if <j> is an interpretation of variables, and a 6 £, 4>(a) £ {f,t}. In what follows, let S be a lattice over a partial order C ; this induces a lattice structure on <S ; in turn, C p is the induced preorder on V(S ) defined earlier. W  W  Definition 4.16.  A symbolic set over a domain 7^(5^) is one of 1. A £ ViS"); 2. a -» A where a £ £; 3. A i U A , where A i , A are symbolic sets; 2  2  4. A i h A , where A i , A are symbolic sets; or 2  2  5. AiIIA , where A , A are symbolic sets. 2  x  2  •  86  Chapter 4. Symbolic Trajectory Evaluation  Each symbolic set represents a number of sets. Each interpretation of variables, yields a scalar set contained in V(S  ). Given an interpretation of variables, there is a natural interpre-  W  tation of symbolic sets, given below.  Definition 4.17. Let 0 G $ be given.  1. <j>{A) = A for all A G V(S ); W  2. <f>(a -> A) = {x G A : 0(a) = t}. Thus if a evaluates to t, then a -> A is the set A, otherwise it is the empty set. 3.  A JI B is defined by 0(A L\B) = 0(A) II <f>(B).  4. I f / : V(S ) UJ  -» 7 (<S ), then the symbolic version of / is defined by  m  ?  u;  0 ( / ( A , . . . , A ) ) = /(cA(A ),...,0(A )). 1  n  1  ri  These definitions can be used in extending set operations such as set union, as well as for more general functions, for example in extending Definition 4.9 to give a definition of symbolic defining trajectory sets.  5. A t  v  B = {(j) G $ : 0(A) n <f>(B)}. v  • The following lemma shows that these definitions are sensible.  Lemma 4.6. Let A,  B, C be symbolic sets over domain V(S ).  1. (AtAlIB)  W  = $.  2. If (A C C) = $ and ( £ t C) = $, then (ARB E C ) = $.  87  Chapter 4. Symbolic Trajectory Evaluation  Proof. Let 0 be arbitrary. (1) 0(A) C 0(A) II 0(B)  Property of II.  (2) 4(A) n <f>(ATl B)  Definition.  Since 0 is arbitrary part 1 follows.  •  Hypothesis 2.  0(A), 0(B) C 0(C)  (3)  (4) 0(A) II 0(B) • 0(C)  From (3), by property of join.  (5) 0(ALIB) C0(C)  Definition.  Since 0 is arbitrary, part 2 follows. 4.4.2 Symbolic Defining Sequence Sets Given this mathematical machinery, symbolic defining sequence sets can now be defined. The definition of defining sequence sets (Definition 4.7) must be extended by using the symbolic versions of set union and join etc. In addition, one more part must be added to the definition to take into account formulas of TL containing variables. For completeness, this definition is given below. Definition 4.18 (Extension to Definition 4.7). Let g € TL. Define the symbolic defining sequence sets of g as A(g) = (A (fif), A (g)), where t  the A (g) are defined recursively by: g  1. If g is simple, A"(g) = {sXX ... :(s,q)e D , or (s, T ) £ Dg}. b  2. A( V g ) = (A'teOuA'fo), A(&)nA(fli)) f  gi  f  2  3. A( Ag ) = (A ^i)liA (0 ),A ^ )UA ^ )) t  gi  4. A(^)  f  t  2  f  2  1  2  = (A (flf),A (^)> ,  t  5. A(Next g) = shiftA(g) 6. A(  gi  Until5f ) 2  = (A^flf! U n t i l g ),A ( f  2  gi  U n t i l g )), where 2  f  88  Chapter 4. Symbolic Trajectory Evaluation  • A^firi U n t i l g )  = U^ 0 (A (Next ^ 1 )LI.. t  2  0  . IlA^Next^-^UA^Next^))  • A (^i U n t i l $r ) = H^ (A (Next°<7i)U • • • uA (Next( )$fi)UA (UexVg )) f  f  f  i_1  f  0  2  2  7. Ifu € V, A(v) = {v-+ { X X . . . },--«'-)• { X X . . . } ) . Ifcf>(v) = t,then 0(A(u)) = A(t), and if <f>(v) = f, then </>(A(u)) = A(f).  The extension of the definition of defining trajectory set is straightforward. Definition 4.19 (Symbolic denning trajectory set). f<(g) =  i(A(g)).  
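To summarise the scalar machinery in executable form before stating the main theorem, the sketch below computes the trajectory of Definition 4.8 for a finite prefix and performs the set comparison that, by Theorem 4.5, decides an assertion. It is an illustration under assumed encodings: states are lattice elements with a supplied join and leq, sets are Python lists, and none of the names are drawn from the Voss tool or from the thesis's implementation.

    # An executable sketch (for finite prefixes, under assumed encodings) of
    # Definition 4.8 and of the set comparison behind Theorem 4.5. Y is the
    # monotonic next-state function; join and leq come from the state lattice.
    def trajectory_of(sigma, Y, join):
        """tau(sigma): the smallest trajectory above sigma (Definition 4.8)."""
        out = [sigma[0]]
        for s in sigma[1:]:
            out.append(join(Y(out[-1]), s))   # t_i = Y(t_{i-1}) joined with s_i
        return out

    def seq_leq(s1, s2, leq):
        return all(leq(a, b) for a, b in zip(s1, s2))

    def assertion_holds(delta_t_h, t_t_g, leq):
        """Theorem 4.5: Delta^t(h) is below T^t(g) in the preorder of Definition 4.5."""
        return all(any(seq_leq(d, t, leq) for d in delta_t_h) for t in t_t_g)

    # Toy use over the two-point lattice 0 below 1, with Y the identity function.
    print(trajectory_of([0, 1, 0], lambda s: s, max))   # [0, 1, 1]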
The main result of symbolic trajectory evaluation is based on Theorem 4.5. It says that the set of interpretations t\ g=^>h \ (those interpretations, 4>, such that <f>(g)=^xf>(h). is exactly the same as the set of interpretations for which A (h) C. T*(<7). So, if we can compute one, then t  T  we can compute the other. Theorem 4.7.  Let g, h be TL formulas.  <] g=^h  |>= (A(A) k  t  v  t^g))  Proof. (J g=$>h > | =  {4 € $ : Vcr e ST, t •< Sat(cr, (b(g)) implies that t -< Sat(a, (f)(h))}  = {(f) G $ : A*(<£(fc)) Cp T\(f){g))}  = A* (A) t  v  f\g)  (Theorem 4.5)  (By definition.)  This is an important result, because efficient methods of computing these symbolic sets and performing the trajectory evaluation exist, and have been implemented in the Voss tool discussed in Chapter 6. This forms the basis of the verification presented here.  Chapter 4. Symbolic Trajectory Evaluation  89  4.4.3 Circuit Models When S = C and TZ = {U, L, H} , and only the realisable fragment of TL„ (those T L forN  71  n  mulas not syntactically containing T ) are considered, computing these verification results is simplified. In the rest of this section we only consider the realisable fragment of TL„. Two important properties of the realisable fragment of TL are given in the next lemma. N  Lemma 4.8. ^ Z).  1. cr G TZT if and only if Z does not appear in cr (for all 2. If cr e TZT and g is in the realisable fragment of TL„, SAT (a,g)  = 0 and SAT (a,g)  T  t  =  SAT^g)  Proof. The proof of (1) comes from the definition of TZ. For (2), recall from Section 3.4 that Sat(cr,  g) =  T only if g contains a subformula g' e G which is either the constant predicate T or if Z n  appears in a.  •  We compute |= ^ g=t>h T (g) t  C TZT  as follows. First, compute ^(g).  It is easy to determine whether  using Lemma 4.8(1). If not, then there are inconsistencies in the antecedent which  should beflaggedfor the user to deal with before verification continues. Thus we may assume that T\g) C TZ . T  \= 4 9=^h Vcr =Wcr  \ =  e  S  e  7e , (SA7V(cr,  =^Vcr €  T  ,  {SATtt(a,  g) C SAT^a,  r  7e , (SA7t(<r, g) T  C SATtt(a,  h)) h))  C 5 A T ( c r , />)) t  (By definition) (since 7e  r  (By Lemma  C  S ) T  4.8(2).)  This result is useful because in this important special case, efficient STE-based algorithms can  Chapter 4. Symbolic Trajectory Evaluation  90  be used. The rest of this thesis uses this result implicitly. The main computational task is to determine f= §  g=^>/i  \. By placing sensible restrictions on the logic used and checking for  inconsistency in the defining trajectory set of the antecedent, we can then deduce |= \ g=s>h ) from|=  $ =£>h). g  Chapter 5 A Compositional Theory for TL  5.1  Motivation  Although STE is an efficient method of model checking, it suffers from the same inherent performance problems that other model checking algorithms do. Enriching the logic that STE supports, as is proposed in previous chapters, potentially exacerbates the problem. A primary thesis of this research is that a compositional theory for TL can overcome performance limitations of automatic model checking. Compositionality provides a method of divide-and-conquer: the problem can be broken into smaller sub-problems, the sub-problems solved using automatic model checking, and the overall result proved using the compositionality theory. This chapter presents the compositional theory for trajectory evaluation, which is a set of sound inference rules for deducing the correctness of verification assertions. 
Chapter 6 discusses the development of a practical tool that can use this compositional theory — this allows the use of an integrated theorem prover/model checker with useful practical implementations. As discussed in Chapter 1, the focus of this theory is property composition: Section 5.2 presents compositional rules for TL; Section 5.3 presents additional compositional rules for T L ; and practical considerations are presented in Section 5.4. Structural composition is briefly n  discussed in Section B . l .  91  Chapter 5. A Compositional Theory for TL  5.2  92  Compositional Rules for the Logic  This section presents the main compositional theory with each rule being presented and proved in turn. The compositional theory is developed for the =^> relation. In general, the theory does not apply to the =D> relation since the disjunction, consequence, transitivity and until rules do not hold for this relation. However, the other composition rules do apply for =t> (in the related theorems below, replacing =g> with = o and ^ with =, and considering only trajectories in TZr will yield the desired result). Moreover, as shown in Section 5.3, the full compositional theory does hold for the =r> relation for the important realisable class of T L  n  The circuit shown in Figure 5.1 will be used in the rest of this section to illustrate the use of the inference rules. The circuit is very simple and can easily be dealt with directly by STE, but the smallness of the circuit helps the clarity of the example. A unit-delay model is used for inverter and gate delays. Notation: [B] is the simple predicate which evaluates to T when the state component B has the value Z, t when B has the value H, f when B has the value L, J_ when B has the value U. Note that except for the Specialisation Rule, all of the proofs are for the scalar case only as this simplifies the proof. However, as a symbolic formula is merely shorthand for a set of scalar formulas, the rules for the symbolic case follow directly in all cases.  Figure 5.1: Example  Chapter 5. A Compositional Theory for TL  5.2.1  93  Identity Rule  This rule is a trivial technical rule. However, it turns out to be useful in practice where a sequence of inference rules will be used to perform a verification. In the practical system which implements this theory, the proofs are written as a program script, and this rule is useful to initialise the process. Its advantage is that it makes the program slightly more elegant. Theorem 5.1.  For all g € TL, g=^>g. Proof. Let t •< Sat(a,g). Clearly then t •< Sat(a,g). Hence g=^>g. 5.2.2  •  Time-shift Rule  The time-shift rule is important because it allows abstraction from the exact times things happen at. This may reduce the amount of detail that the human verifier will have to deal with, and more importantly, allows verification results to be reused a number of times. In practice this is very important in making verification efficient. Lemma 5.2.  Suppose c/=g>A. Then Next g=^>Next h Proof. Let a = s sis ...  be a sequence such that t •< Sat(a, Next g).  (1) t -< Sat(a>i,g)  By definition of the satisfaction relation.  (2) t ^ Sat(a>i,h)  Sinceg==$>h.  0  (3)  2  t X 5a?(a> ,Next/i) 0  Definition of satisfaction of Next.  (4) Thus Next g==$>Next h.  • Theorem 5.3 follows directly from Lemma 5.2 by induction.  Chapter 5. A Compositional Theory for TL  94  Theorem 5.3.  Suppose g=$>h. Then Vi > 0, Next*c/=^>Next'/i. Example 5.1.  In the circuit of Figure 5.1, using STE, it can be shown that [5]=g>Next (-• [£)]). 
Using Theorem 5.3, we can deduce that Vi > 0, Next*[£]==§> Next  (-'[£]).  The requirement of Theorem 5.3 that i > 0 is necessary: in general it does not hold when i < 0. For example, in our circuit we can prove that Next [D]=g>Next [F]. However, it is not the 1  2  case that Next°[Z>]=^>Next [ E]. In the former case, the node C has the value H at time 1 1  J  because A is connected to ground; at time 0 we know nothing of the value of C. 5.2.3  Conjunction Rule  Conjunction and disjunction allow the combination of separately proved results. This is particularly useful where properties of different parts of the system being verified have been proved and need to be combined. Given two results <7I=^>/JI and g ^^>h , the two antecedents are 2  2  combined into one antecedent and the two consequents are combined into one consequent. Using the conjunction rule, combination is done using the A operator, and using the disjunction rule, combination is done using the V operator. There is no need for the gi and hi to be 'independent', i.e. they can share common sub-formulas. Theorem 5.4.  Supposeg =^>h andg =^>h . l  i  2  Then gi A g =^hi A h . 2  2  2  Chapter 5. A Compositional Theory for T L  Proof. Let a £  95  and suppose t ^ Sat(a, gxAg2).  (1) t •< Sat(a,g ) A Sat(a,g ) x  Definition of Sat(a, g A g ).  2  x  (2)  t r< Sat(a,gi), i = 1,2  Lemma 3.2(2).  (3)  t ^ Sat(a,hi), i = 1,2  Since gi=^>hi,  (4)  t X Sat(a, hi) A Sat(a, h2)  (5)  t ^ Sat(a,hiAh2)  2  i — 1,2.  Lemma 3.2(2). Definition of Saf(<r, /ti A /i ). 2  As cr is arbitrary, c/i A c/2 =^>/ii A / i -  Ll  2  Example 5.2. In the circuit shown in Figure 5.1, we can show using STE that - . [ £ ] = » N e x t [D]  - i [ A ] = » N e x t [C\. Using Theorem 5.4 we have that:  ,[A] A ~>[B] =^>Next [C] A Next [£>].  5.2.4  Disjunction Rule  Theorem 5.5. Supposeg\=^hi andg ^^>h . 2  2  Then #i V g =^>h V / i . 2  2  x  Proo/i Let c £ S and suppose t ^ Sat(a, gx V g ). w  2  (1) t ^ Sat(a,gx)\J Sat(a,g )  Definition of Sat(a,g V c/ ).  2  x  (2)  t X Sat(a,gi), fori = 1 orz = 2  Lemma 3.2(1).  (3)  t r< 5a?(cr,  Since g =g>/j , i = 1,2.  (4)  t X Sat(a,hx)y Sat(a,h2)  Lemma 3.2(1).  (4)  t ^ Sat(cr,hiV h2)  Definition of Sat(a, hx V  for i - 1 or i = 2  As a is arbitrary, #i V g =^h 2  x  \J h . 2  i  2  i  /J ). 2  •  Chapter 5. A Compositional Theory for TL  96  Example 5.3.  In the circuit in Figure 5.1, we can use STE to show that ->[D]=^Tlext->[E] -.[C]=»Next-•[£].  Using Theorem 5.5, we have that [->[D] V -n[C])=»Next-•[£]. Although the consequents of both premisses used here in the disjunct rule are the same, in general they may be different. 5.2.5  Rules of Consequence  The rules of consequence have two main purposes: • Rewriting antecedents and consequents into syntactically different but semantically equivalent forms (see Example 5.4); • Removing information which is not needed for subsequent steps in the proof so as to reduce clutter (see Example 5.5). The next lemma is an auxiliary result. Informally it says that if the defining sequence sets of g and h are ordered with respect to each other, then every sequence that satisfies h also satisfies 9-  Lemma 5.6.  Suppose A'(flf) Q A (h) and t -< Sat(a, h). k  v  Then t -< Sat(cr,g).  Chapter 5. A Compositional theory for TL  97  Proof. (1)  38 G A* (h) 9 8 :  (2)  38' <E A'(flf) 3 8'QS  f  f  and t ^ Sat(8,h) Lemma4.3. Definition of Q  r  .  (3) t •< Sat(8',g)  Lemma 4.3.  (4)  Transitivity of (1) and (2).  Co-  (5) t -< Sat(a,g)  From (3) and (4) by Lemma 4.3.  
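The rules established so far already suggest how verified results can be combined mechanically. The following sketch treats assertions as data and applies the time-shift, conjunction and disjunction rules (Theorems 5.3 to 5.5) to the two STE results of Example 5.2. The tuple encoding and every name are invented for this illustration and do not describe any actual tool.

    # Purely illustrative: verified results g ==> h treated as data and combined
    # with the rules proved above. Formulas are nested tuples; every name is an
    # assumption of this example.
    def time_shift(assertion, i):
        """Theorem 5.3: from g ==> h derive Next^i g ==> Next^i h, for i >= 0."""
        g, h = assertion
        for _ in range(i):
            g, h = ('next', g), ('next', h)
        return (g, h)

    def conjoin(a1, a2):
        """Theorem 5.4: from g1 ==> h1 and g2 ==> h2 derive g1 and g2 ==> h1 and h2."""
        return (('and', a1[0], a2[0]), ('and', a1[1], a2[1]))

    def disjoin(a1, a2):
        """Theorem 5.5: the corresponding rule for disjunction."""
        return (('or', a1[0], a2[0]), ('or', a1[1], a2[1]))

    # The two STE results of Example 5.2, combined and then shifted by one step.
    a1 = (('not', 'B'), ('next', 'D'))
    a2 = (('not', 'A'), ('next', 'C'))
    print(time_shift(conjoin(a1, a2), 1))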
• The intuition behind this is that if A (g) C p A (h), then any sequence that satisfies h will also t  k  satisfy g. Given this result, the rules of consequence are easy to prove. Theorem 5.7. Suppose g=^>h and A (g) t  C p A'(flfi) and A ' ( / i i ) C p A*(/i).  Then <?i=^S>/ii. Proof. Suppose a is a trajectory such that t -< Sat(a, gi). (1) t -< Sat(a,g)  Lemma 5.6.  (2) t •< Sat(a,h)  g=$>h.  (3) t -< Sat(a,hi)  Lemma 5.6.  (4)  Since a is arbitrary.  gf =^>/i 1  1  •  Example 5.4.  Using this theorem to rewrite one assertion into a semantically equivalent one can be illustrated by examining the result of Example 5.2: (~>[A] A --[5])=»(Next[C] A Next [D]). Since conjunction can be distributed over the next-time operator, as Next [C] A Next [D] = Next ([C] A[£>]), this can be rewritten as: (->[A\ A -.[£])=»Next ([C]A[D]).  98  Chapter 5. A Compositional Theory for TL  Example 5.5.  In the circuit of Figure 5.1, we can show using STE that ([B] A Next [B])==^Next (-. D) A Next (-./J). 1  2  J  Using Theorem 5.7, we can refine this to ([B] A Next [£])==§>Next (-.Z)). 1  5.2.6  Transitivity  The rule of transitivity is an analogue of the transitivity rule of logic: it gives the condition for deducing from c/i=g>/^i and g =^>h that gi=^>h . This condition is that A ' (g ) Qv A ^ g i ) ! ! 2  2  2  2  A*(/ii). Note that this is a weaker condition than showing that A * ^ ) Cp A*(/ii). Theorem 5.8.  Suppose =^>hi and g ==$>h and that A (g ) Cp ^(gi) U A (h ). l  gi  2  2  t  2  1  Then g =^>h . 1  2  Proof. Suppose a is a trajectory such that t -< Sat(a, gi). (1) t X Sat(a,hi)  5i=§>/*i  (2) t X Sat(a, giAhi)  Definition of 5af(cr, giAh ).  (3)  35 e A*(0i A/ii) 9 SQa  Lemma 4.3.  (4)  A * ^ ! A hi) = A (gi) II A ( / i )  By definition of A*.  (5)  35'e A * ( ^ ) 3 <y E 5  A ' G f e ) ^  (6)  S'Qa  (7)  t X Sat{5',g )  From (5) by Lemma 4.3.  (8)  t r< Sat(a,g )  From (6), (7) by monotonicity.  t  x  t  1  A^OHA*^).  Applying transitivity to (3) and (5). 2  2  Chapter 5. A Compositional Theory for TL  (9)  t -< Sat(a,h )  (10)  ^  g2=^h .  2  1  =  ^>/i  99  2  Since a was arbitrary.  2  • Example 5.6.  Using STE, we can prove the following about the circuit of Figure 5.1: • -.[BJ^^Next ^] 2  • Next [£]^>Next HF]) 2  3  Then, using Theorem 5.8, we can deduce ->[5]=^>Next (-i[F]). 3  5.2.7  Specialisation  Specialisation is one of the key inference rules. By using specialisation it is possible to generate a large number of specific results from one general result. With STE, it is often cheaper to prove a more general result than a more specialised result. Thus in some cases, it may be cheaper to generate a more general result than needed and then to specialise this general result than to use STE to obtain the result directly. Specialisation also promotes the re-use of results. It is often used together with transitivity: before applying transitivity to combine two assertions, one or both of the assertions arefirstspecialised. For example, a general proof of the correctness of an adder is straightforward to obtain using trajectory evaluation, even for large bit widths. Such a proof may show that if bit vectors representing the numbers a and b are given as inputs to the circuit, then a few time steps later the bit-vector representing a + b emerges as output. There are two reasons why one might want to specialise such a proof: • If the adder is part of a large circuit the actual inputs may be bit-vectors representing complex mathematical expressions. 
Since STE relies on representing bit-vectors with BDDs, if the BDDs needed to represent these mathematical expressions are very large, it may not  100  Chapter 5. A Compositional Theory for TL  be possible to use STE to prove that the adder works correctly for the particular inputs. The solution is to prove that the adder works correctly for the general case, and then to specialise the result appropriately. • The adder may be used a number of times in a computation, each with different input values. Instead of proving the correctness of the circuit for each set of inputs, the proof can be done once and then the specific results needed can be obtained by specialisation (and, probably, time-shifting). Recall the definition of the boolean subset of TL presented as Definition 4.15. Definition 5.1.  Define S, a subset of TL, by S ::=  t  | f  |  V  |  SAS | ->£  Definition 5.2.  1. £: V — y S is a substitution. 2. A substitution £: V — > S can be extended to map from TL to TL: • i{9i^92) = £(0i)Af(o ) 2  • •  £ H ? ) =  -  e(Nexto)  to)  = Next (((g))  • ((g Until h) = ((g) JJntil ((h) • Otherwise, if g is not a variable, ((g) = g If T is the assertion |= (] g=$>h > [ then £(T) is the assertion (= ^ £{g)=$>£(h) [>.  Chapter 5. A Compositional Theory for TL  101  Lemma 5.9 (Substitution Lemma).  Suppose |= ^ g=g>/i I and let £ be a substitution.  Proof. Let (f> by an arbitrary interpretation of variables and a be an arbitrary trajectory such that Sat{aA^{g))). (1)  Let 4'  = 4> o £  (2)  t ^ Sat(a^'{g))  Rewriting supposition.  (3)  4' is an interpretation of variables  By construction.  (4)  t •< Sat(a,<f/(h))  (5)  t =< Sat(a,<f>(((h)))  Rewriting (4).  (6)  h l ( ( f i ) ^ ) l  0 and a were arbitrary.  • Example 5.7.  Suppose that part of a circuit multiplies two 64-bit numbers together and then compares the result to some 128-bit number. Let c be the boolean expression that this part of the circuit computes — in general it will not be possible to represent c efficiently since the BDD needed to represent c will be extremely large. Now suppose that the next step in the circuit is to invert c. We may wish to prove that T= x  \=\[B) = c =$> Next ([£•] = -.c) \ is true.  Given that c is so large, it will not be possible to use STE directly to do this. But, let T= 2  \=/\[B] = a=^Hext{[D}  = ^a)\  where a is a variable (an element of V). Proving that T holds using STE is trivial. Having proved T , we can easily prove Ti using 2  2  102  Chapter 5. A Compositional Theory for TL  Lemma 5.9. Let when v = a  c  {v  otherwise  be a substitution. Note that Ti = £(T ). As T holds, and as £ is a substitution, by Lemma 5.9, 2  2  T i holds too. Although substitution is useful, in practice sometimes a more sophisticated transformation is also desirable. Lemma 5.10 shows that it is possible to perform a type of conditional substitution. A specialisation is a conjunction of conditional substitutions which allows us to perform different substitutions in different circumstances. An example of the use of specialisation is given in Chapter 7. Definition 5.3.  S = [(ei, £ i ) , . . • , ( e , £„)] where each £ is a substitution and each e,- 6 S, is a specialisation. n  If 0 € TL, then -(</) = A? (e,=1  &(</)).  Lemma 5.10 (Guard lemma).  Suppose e e S and (= Then |= <] (e  5  (]  g=g>/*  r)==£>(e  /J)  Proof. Suppose t •< Sat(a, e =>- g). By the definition of the satisfaction relation, either: (i) t -< Sat(a, -ie). In this case, by the definition of the satisfaction relation, t •< Sat(a, e =>• /i). 
(ii) t ^ 5af(cr,  In this case, by assumption Sat(a, h). Thus, by definition of the satisfaction  relation, t -< Sat(cr, e =>- h). As a was arbitrary the result follows.  •  103  Chapter 5. A Compositional Theory for TL  Theorem 5.11 (Specialisation Theorem).  Let H = [ ( e i , £ i ) , . . . , (e , £„)] be a specialisation, and suppose that |= ^ g=^>h \. n  (1) For* = l , . . . ,n,\= Proof.  By Lemma 5.9.  Ui(9)=3>ti(W  By Lemma 5.10.  (2) (3)  |= 4 A? (e,-  (4)  |= ^  A" (e; =>•  =1  } Repeated application of Theorem 5.4.  =1  )  By definition.  • 5.2.8  Until Rule  Theorem 5.12. Suppose ^r =g>/i and g =^>h . Then gi Until c/ =^>/ii Until h . 1  1  2  2  Proof. Let cr = s sis ... (1) 3*9 0  2  2  be a trajectory such that t -< Sat(a,gi Untilg ).  2  2  t -< AJ~o Sat(cr, Next^) A Sat(a, Next'^2) By definition of Sat. (2)  t ^ Sar(cr, Next'# ) and 2  Definition of conjunction.  t ^ Sat(ja, Next ^i), j = 0,... , i — 1 J  (3)  •  t X Sar(cr, Next 7i ) and s  2  Assumptions and Theorem 5.3.  t ^ 5af(cr, Next-'/ii), j = 0,... , i - 1 (4)  t -< A p o Sat(a, Next^j) Ataf(cr,Next*/» )  Definition of Sat.  (5)  t -<Sat(a hi Until /i )  Definition of Sat.  (6)  01 Until c/ =g>/ii Until h  2  :  2  2  2  Since a was arbitrary.  Corollary 5.13. Suppose g=^>h: then Existsg=^>Exists/j and Globalg=^Global h.  Chapter 5. A Compositional Theory for T L  104  Proof. The first result follows directly from the theorem using the definition of the E x i s t s operator and the fact that t = » t . Let a be an arbitrary trajectory such that t •< Sat(a, Global g). (1)  f •< Sat(a, E x i s t s (->g)) Expanding the definition of G l o b a l .  (2)  Wi,f •< Sat(a>i,->g)  Definition of Sat for E x i s t s .  (3)  V i , t X Sat(a>i,g)  Definition of Sat for ->, Lemma 3.1.  (4)  V i . t X Sat(a>i,h)  g=£>h  (5)  V i , f ^ Sat(a>i,->h)  Definition of Sat for - i .  (6)  f X Sat(a, Exists (->h)) Definition of Sat for Exists .  (7)  t X Sat(cr, Global g)  Definition of Sat for G l o b a l .  Which concludes the proof since a was arbitrary.  •  Example 5.8. Consider again the circuit in Figure 5.1. Using STE, it is easy to prove [£]==^Next (->[£)]). Using Corollary 5.13, we can deduce that Global [B]=g>Global (Next (->[D])).  5.3 Compositional Rules for T L  n  For the realisable fragment of TL„, the compositional theory above applies to the =t> relation as well as the =g> relation. A key result used is Lemma 4.8. The remainder of the section assumes that we are dealing solely with the realisable fragment of TL„. Only the statements of theorems are given as the proofs are very similar to or use the proofs in the previous section so the proofs are deferred to Section A.3. Table 5.2 summarises the rules.  105  Chapter 5. A Compositional Theory for T L  Theorem 5.14 (Identity). For allg G TL„, g=Og. Theorem 5.15 (Time-shift). Suppose g=oh.  Then Wt > 0, Next'g=> Next  Theorem 5.16 (Conjunction). Suppose gi=t>h and g =t>/i . Then gi A g =$>h\ A h . 1  2  2  2  2  Theorem 5.17 (Disjunction). Suppose gi=i>hi and g =o>h . Then gi V g =S>hi V h . 2  2  2  2  Theorem 5.18 (Consequence). Suppose g = o / i and A (g) Cp A (g ) and A*(/ii) Cp A (/i). Then ^ =o/e-i. t  t  fc  1  Theorem 5.19 (Transitivity). Suppose c/i=o/ii and g =>h 2  and that A (g ) Cp A (g ) II A ( / i i ) . t  2  t  2  t  1  Then g =t>h . Theorem 5.20 (Specialisation). 1  2  Let EE = [(ei, £ i ) , . . • , (e„, £ ) ] be specialisation, and suppose that |= t\ g=C>h |). Then \= n  tE(g)=>~(h)). Theorem 5.21 (Until). Suppose g =£>hi and g =$>h . 
Then gi U n t i l g =[>/ii U n t i l h . 1=  2  2  2  2  Other rules, like Corollary 5.13 are possible too. To illustrate this, and because the result is useful, afiniteversion of Corollary 5.13 follows. Lemma 5.22. If g=t>h, then for all t, then Global [(0, t)] c / = ^ G l o b a l [(0, t)] h.  Chapter 5. A Compositional Theory for T L  106  Proof. (By induction ont) (1)  Global [(0, 0)] 0 ^ > G l o b a l [(0, 0)] h  (2)  Assume as induction hypothesis:  By hypothesis.  Global [(0, t - 1)] 0=^>Global [(0, t - 1)] h (3)  Next <7=ONext*/i  Time shift of hypothesis.  <  (4) Global [(0, t)] #=f>Global [(0, t)] h This concludes the induction.  Conjunction of (2) and (3) •  Corollary 5.23. If g=f>h, then for all s,t,t  > s, Global [(s, t)] g = 0 Global [(s, t)] h.  Proof. (1)  Global [(0, t - s)] fl(=>Global[(0,< - s)] h  Lemma 5.22  (2)  Global [($, t)] 0=OGlobal[(s,i)] h  Time-shift (1).  •  5.4  Practical Considerations  5.4.1  Determining the Ordering Relation: is A (g) Q-p t  A*(/i)?  To apply the rules of consequence and transitivity, it is necessary to answer questions such as A*(<7)  A (h)l One way of testing this is to compute the sets and perform the comparison t  directly. However, for practical reasons we often wish to avoid the computation of the sets, and to use syntactic and other semantic information to determine the set ordering. Typically formulas like g and h share common sub-formulas and even some structure, which makes the tests explored in this section practical. Lemma 5.24 is the starting point of these tests, and although very simple, it is important in  Chapter 5. A Compositional Theory for TL  Rule  Name Identity  Side condition  g=>g  Time-shift  g=$>h Next*c/=>Next /i  Conjunction  g =$>h g =$>h giAg =t>h Ah l  1  2  2  l  g =C>h 1  1  2  g =C>h 2  2  9\ V g =>h 2  Vh  1  2  g=>h  Consequence  A\g)  g =S>h 1  Transitivity  t >0  <  2  Disjunction  107  V  l  1  A (fc) fc  l  g =t>h g =$>h gi=t>h 1  A (g ), A (hi) Cp t  \Z  1  2  A\g )Q  2  2  A ' ^ O n A  v  4  ^ )  2  Specialisation Until  E a specialisation.  \=tZ(g)=4>Z{h)) g =^>h g =oh gi Untilg =t>hi U n t i l h 1  1  2  2  2  2  Table 5.2: Summary of TL„ Inference Rules practice. One effect of this lemma is that if two formulas are syntactically different but semantically equivalent, then they are interchangeable in formulas. Lemma 5.24. If g and h are simple then the question whether A (g) Cp A (h) is whether for V(s, q) G Dh t  with q = t or q — T, 3(s', q) G D with s ' C s . g  k  .  Proof. This is a restatement of the definition of A*.  • Given this starting point, the ordering relation can be determined by examining the structure of formulas and applying the following lemmas.  108  Chapter 5. A Compositional Theory for TL  Lemma 5.25. If g = V g , then A\g) Q A * ^ ) . g i  2  P  Proof. (1)  Let 8 G A* (01)  (2)  A'(flf) = A (g ) U A (g )  By definition of A*  (3)  6 e A\g)  Set theory  t  t  i  2  (4) 8Q8 (5)  Reflexivity of partial order  A*(o) C p A (0i)  •  Definition of C p  t  Corollary 5.26. For e G cT, A ( e ^ o) C p A (0). k  k  Proo/! Straight from the definition of implication.  •  Lemma 5.27. If g = g\ Ag , then A*(01) C p A'fa). 2  Proof. (1)  Let5GA (0) t  G A'(oi), <J G A (0 ) 9 <J =  (2) (3)  2  fc  2  8^8  U8  2  Definition of A . k  Definition of join.  (4) A (0 ) C p A'(flf) t  1  Definition of C p  • Lemma 5.28. Suppose A (g) C p A (h): then Vi > 0, A^Next^) C p A (Hext h). k  l  t  i  Proof. By induction on i. The base case of i = 0 follows directly from the assumption. 
Suppose A'tNext'flr) C p A^Next'Ti).  Chapter 5. A Compositional Theory for TL  Then shift{A (llGxt g)) t  109  Qp shiftA*(Next*h)) .  i  1  Thus A ^ N e x t ^ ^ ) Q A*(Next( )fc). 1  •  i+1  v  Lemma 5.29. Suppose A'(flri) Qp A*(/ii) and A (0 ) Qv A*(fc ). k  2  Then A * ^ Untilg ) 2  QP  A (/ii k  2  U n t i l h ). 2  Proof. (1) \/i> 0, A*(Next'sfi) QP A (Next*'Jii)  By Lemma 5.28  (2) V« > 0, A ^ N e x t ^ ) Qv A^Next*'^)  By Lemma 5.28  fc  (3)  A*(oi U n t i l e ) = U ^ ( A ( N e x t ° 0 ) II . . . II A ^ N e x t ^ " ^ ) II A (Next*o )) t  1  o  4  1  2  By definition (4)  Q  U°l (A'(Next%) n . . . H A^Next**'- )/*!) II A*(Next*/i )) 1  v  0  2  From(l) and (2) (5)  = A*(^i U n t i l h ) 2  By definition  • Lemma 5.30. For all i, A*(Next 0) Qp A*(Globalg). 8  Proof. Let 8 £ A^Globalflf). (1)  t < Sat(8, Global g)  Lemma A.6  (2)  = ->Sat(5, Exists-i^r)  Definition of Global  (3)  = ->Sat(8, t U n t i l ->g)  Definition of Exists  (4)  = -i V ^ (Sat(8>i,->Next g)) Definition of satisfaction  (5)  = -i V-^ -i(5af(5>i,Next 0))  Definition of satisfaction  (6)  = A£ Saf(£>;,Next 0)  De Morgan's law  l  0  J  0  J  o  (7) V i , t -< Sat(8>i,Next g)  Definition of conjunction  l  1  Recall that shift(sos s2 • • •) = 1  XsoSis  2  •• •  110  Chapter 5. A Compositional Theory for TL  36' e A (Next'y) BS'QS  Lemma 4.3  4  (8)  Vi,  (9)  V i , A^Next'flf) C p A^Globalg)  Definition of C p  • Similar rules can be developed for A ; the two sets of rules are tied together by the definition f  of the satisfaction of negation. These tests seem very simple and obvious, but in practice they allow the development of efficient algorithms to test whether A (g) C p t  5.4.2  Restriction to T L  A (h). t  n  The restrictions on the logic TL„ make it much easier reason about. Recall that the basis of the logic is the set of predicates G • In practice, many T L formulas are of the form n  n  r  .  si  A Next ( A 2  i=0  j=0  [riij] = e ) tJ  '  '  where the e;j 6 £• Given  flr = Ar: Nexf(A;Lo[ny = <,) o  h = A^ Next (A^[n ] = e ) , fc  0  M  w  from Lemma 5.27 it follows that to determine whether A (g) C p A ( A ) , we need to check t  t  whether Vi, j ; 3/ 9 n' =ra,-,/A e\ = e, . itj  3  v  This can largely be done syntactically. Depending on the representation used, testing whether  e i — e\j may either be done syntactically or using other semantic information. Particularly it  when the level of abstraction is raised, it is often the case that other semantic information must be used. Of course, there are important cases where formulas are not of this form, and we need to have other ways of reasoning about them. A more general and typical case is verifying an assertion of the form \ g=S>hty,where g is an arbitrary TL„ formulas and h = Next ( A f J  = 0  ( [ « i ] = &i) )•  111  Chapter 5. A Compositional Theory for TL  Definition 5.4. Strict dependence: Informally, g £ TL„ is strictly dependent on the state components fl = { r i , . . . , r;} at time t if g being true at time t implies that the components n , . . . , rj have defined values, and g is not dependent on any other state components. The formal condition for strict dependence is: g is strictly dependent on the state componentsflif V£ £ A\g) : Vr £ fl, U C «J [ ]; Vs g fl, <$,[s] = U . (  r  In practice, strict dependence can often be checked syntactically. For example [B] = e where e £ £ is strictly dependent on B. 
This comes from the property of exclusive-or — if a © b £ B, where exclusive-or, a © b, is defined as aA(-<b) V (->a) A 6 — then a, 6 £ B). Moreover, strict dependence can be checked relatively efficiently (as will be seen later). Theorem 5.31 (Generalised Transitivity). Let Ai be a trajectory formula such that r = T  AI  £ 7£T(^)> and let hi = Next*/i be a TL„  formula strictly dependent on state components {r ,... x  poral operators. Let A = Next*(Aj 2  Suppose |= (\ Ai=>hi  =1  , n } at time t where h contains no tem-  [rj] = VJ) where the Vj £ V.  > | and |= ^ A =>h 2  |>. Then,  2  (1)  There is a substitution £ such that |= t\  (2)  h(T *) = t.  Ai=^>£(h )and 2  A  t  Proof. (1)  For i = 1,... , /, 3e{ £ £ 3 e,- = r^ [r ]  (2)  Let C = Next* A $  1  t  = 1  |= ^ A I = I > / H ^, strict dependence of hi.  [r,-] = e;  (3)  By construction of C.  (4)  Conjunction.  (5)  For v £ {vi,... , vi} let £(UJ) = ej For u ^ {vi,... , vi} let £(u) = u  (6)  |= (| A =t>/i > [ 2  2  Given.  Chapter 5.A Compositional Theory for T L  112  Substitution (Lemma A. 18).  (7) (8)  h^iAe(A )=>^2)^  Rule of consequence.  (9)  C = t(A )  By construction.  2  2  From (4), (8) by transitivity.  (10) This concludes the proof of (1) (11)  \= (| Ai =>Next'/i > [ and T  M  'h\T )=t Al  t  G  n (t) T  This concludes the proof of (2)  •  Although the proof of this theorem is relatively complex, the theorem itself is not, and very importantly many important side-conditions can be checked automatically. The seeming complexity of the theorem comes from having to relate Ai to A . But, this turns out to be the virtue 2  of the theorem. The difficulty with trying to use transitivity between two results |= t\ Ax => hi \ and |= \ A =c>h ) 2  2  is to find the appropriate specialisation for the latter result. This theorem  provides a method for doing this: the first part of the theorem says that a specialisation exists, and the second part helps find it. The example below illustrates the use of the theorem.  Example 5.9. Figure 5.2 shows two cascaded carry-save adders (CSAs). There are four inputs to the entire circuit, and two outputs. Three of the inputs get fed into one of the CSAs; the other C S A gets the fourth of the inputs and the two outputs of the first C S A . Assuming each C S A takes one unit of time to compute its results, if four values get entered at J, K, L and Af, two units of time later, the sum of these four values will be the same as the sum on nodes P and Q.  Ai  = ([J] = j) A([K] = k) A{[L] = I) A(Next [Af] = m)  hi  = Next ([N] + [0] = j + k + I A [M] — m)  A h  = Next ([Af] = m) A([N] = n) A([0] = o) = Next ([P] + [Q] = m + n + o)  2  2  2  113  Chapter 5. A Compositional Theory for TL  Then |= 4 A i = r > / i i [>  |= 4 A =oh }. 2  2  These two results can be proved using trajectory evaluation. The process of performing tra-  6 TZr{d(hi)).  jectory evaluation also checks that T  AI  A  2  and hi are of the correct form for  Theorem 5.31. Furthermore the strict dependence of hi on components M, N and O can be checked syntactically. By the theorem we have that there is a specialisation £ such that  \=$A =>Kext ([P] 2  l  + [Q] = Z(m + n + o))) _  and ([AT] + [O] = j + fc + I A [M] = m)(T ) AI  = t  which means that  (^[N] + ^[0]=j  + k+  l)=t.  But, by the structure of hi we know that  r  M 2  [N] = f (n) and  [0] = £(o) and  [M] = £ ( m )  and so, as by the properties of substitution £(x + y) = £ ( x ) + £(y), (£(n + o) = j +fc+ / A £(m) = m) = t. 
This result has given us sufficient information about £. Thus, | = (J A i = T > N e x t ( [ P ] + [Q] = j + fc + / + m ) }. 2  5.5 Summary This chapter presented a compositional theory for TL; this theory is very important in overcoming the computational bottlenecks of automatic model checking. The focus of the compositional theory is property composition, which is particularly suitable for STE-based model checking. The general compositional theory for TL was presented, followed by additional rules for T L . n  114  Chapter 5. A Compositional Theory for TL  ML  K J  o  N  Q P  Figure 5.2: Two Cascaded Carry-Save Adders  Section 5.4 discussed some issues that are important in the practical implementation of the theory. Chapter 6 shows how the compositional theory can be implemented in a practical tool.  Chapter 6 Developing a Practical Tool  This chapter discusses how to put the ideas presented in the previous chapters into practice. A number of prototype verification systems using these ideas have been implemented to test how effectively the verification methodology can be used. Although these prototypes have been used to verify substantial circuits, they are prototypes, and the purpose of the chapter is to show how a practical verification system using TL can be developed, rather than to describe a particular system. Section 6.1 discusses the Voss system, developed to support the restricted form of trajectory evaluation. Voss is important because the algorithms that it implements form the core of the prototype verification systems. This section also discusses the important issues of how boolean expressions, sets of interpretations, and sets of states are represented. Section 6.2 examines higher-level representational issues, in particular efficient ways of representing TL formulas so that they can be efficiently stored and manipulated. It is important that appropriate representational schemes be used since different methods are appropriate at different stages. By converting (automatically) from one scheme to another, the strengths of the different methods can be combined. Section 6.3 shows how trajectory evaluation and theorem proving can be combined into one, integrated system. The motivation for this is to provide the user with a tool which provides the appropriate proof methods at the right level of abstraction — model checking at the low level, theorem proving at the high level where human insight is most productively used. The theorem prover component is the implementation of the compositional theory, which is critical  115  Chapter 6. Developing a Practical Tool  116  for the practicality of the approach. One of the key issues here is how to provide as much assistance to the human verifier as possible. The final section, Section 6.4 extends existing trajectory evaluation algorithms so they can be used to support a richer logic. 6.1  The Voss System  In order to verify realistic systems, any theory of verification needs a good tool to support it. Seger developed the Voss system [115] as a formal verification system (primarily for hardware verification) that uses symbolic trajectory evaluation extensively. There are two core parts of Voss. The user interface to Voss is through the functional language FL, a lazy, strongly typed language, which can be considered a dialect of M L [107]. One of the key features of FL is that BDDs are built into the language, as boolean objects are by default represented by BDDs. 
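For a rough feel of what this means in practice, the fragment below (written in Python purely for illustration; a small expression type stands in for FL's BDD-backed booleans, and all names are invented for the example) builds a symbolic boolean as an ordinary value and evaluates it under an interpretation of its variables:

    # Toy stand-in (Python, not FL).  In FL every boolean is a BDD, so an expression
    # such as the one built below is an ordinary value that can be stored, passed
    # around, and evaluated under an interpretation.  Names are illustrative only.

    class Sym:
        def __init__(self, op, *args):
            self.op, self.args = op, args
        def __and__(self, other):  return Sym("and", self, other)
        def __or__(self, other):   return Sym("or", self, other)
        def __invert__(self):      return Sym("not", self)

    def var(name):
        return Sym("var", name)

    def evaluate(e, interp):
        """Evaluate a symbolic boolean under an interpretation of its variables."""
        if e.op == "var":
            return interp[e.args[0]]
        vals = [evaluate(a, interp) for a in e.args]
        if e.op == "and": return vals[0] and vals[1]
        if e.op == "or":  return vals[0] or vals[1]
        if e.op == "not": return not vals[0]
        raise ValueError(e.op)

    a, b = var("a"), var("b")
    carry = (a & b) | (~a & b)          # an ordinary value; in FL it would be a BDD
    print(evaluate(carry, {"a": True, "b": True}))    # True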
Since BDDs are an efficient method of representing boolean functions, data structures based on BDDs, such as bit vectors representing integers, are conveniently and efficiently manipulated. Of course, as previously discussed, the limitations of BDDs mean that there are limitations on what can be represented and manipulated efficiently; how these limitations are overcome is an important topic of this chapter. The second component of Voss is a symbolic simulation engine with comprehensive delay modelling capabilities. This simulation engine, which is invoked by an FL command, provides the underlying trajectory evaluation mechanism for trajectory formulas. Trajectory formulas are converted into an internal representations (the 'quintuple lists') and passed to the simulation engine; these quintuple lists essentially are representations of the defining sequences of the antecedent and consequent formulas comprising the assertions. The antecedent formula is used to initialise the circuit model, and the simulation engine then computes the defining trajectory for the antecedent. As this evaluation proceeds, Voss will flag any antecedent failures, viz. a Z appearing in the antecedent, and compares the defining trajectory  Chapter 6. Developing a Practical Tool  117  with the defining sequence of the consequent. Two types of errors are reported if the comparison fails: a weak consequent failure occurs if Us appearing in the defining trajectory are the cause of the failure ; a strong consequent failure is reported if the defining trajectory of the antecedent 1  is not commensurable with the defining sequence of the consequent. Using the terminology of 2  TL and Q, a weak failure corresponds to the satisfaction relation returning J_, a strong failure corresponds to a f . Circuit models can be described in a number of formats. Interacting through FL, the user sees models as abstract data types (ADTs) of type fsm. FL provides a library (called the EXE library) which allows the user to construct gate level descriptions of circuits. Once a circuit is constructed as an EXE object, the model can be converted into an fsm model. There are also tools provided for converting other formats (both gate level and switch level models) into fsm objects; among others, VHDL and SILOS circuit descriptions can be accepted. Representing Sets of Interpretations Since STE-based verification computes the sets of interpretations of variables for which a given relation holds, efficient methods for representing and manipulating these sets is important. Voss represents a set of interpretations by a boolean expression (i.e., by a BDD). If ip is a boolean expression, then ip represents the set {<$> £ $ : </>(</?) = t}. This representation relies on the power of BDDs, so although usually a good method, it breaks down sometimes. One advantage of this representation is that set manipulation can easily be accomplished as boolean operations. If ipi represents the set of mappings $1 and tp the set of mappings $ , then ipy V ip is the 2  2  representation of $! U $ , ^1 A (/?2 the representation of $1 n $ , 2  2  2  is the representation of  $ \ $ i , and set containment can be tested by computing for logical implication. This happens if the denning trajectory of the antecedent is less than the defining sequence of the consequent; the verification might succeed with a stronger antecedent. A stronger antecedent will only make things worse. 1  2  Chapter 6. 
Developing a Practical Tool  118

A more general point about representing interpretations also needs to be made. Suppose that b ∈ E is a boolean expression (and so represented as a BDD). Let v_1, …, v_m be the variables appearing in b (this is a simple syntactic test: since quantification does not exist in E, we do not have to ask whether variables are free or bound). To ask whether there is an interpretation φ such that φ(b) = t is the same as asking whether ∃v_1, …, v_m. b (are there boolean values that can replace the variables so that the expression evaluates to true?). Since existential quantification is a standard BDD operation, this can be computed in FL through the construction and manipulation of BDDs.

Representing Sets of States

Voss manipulates and analyses circuit models, viz. models where the state space is naturally represented by C^n for some n. A state in C^n is thus a vector (c_1, …, c_n), where each c_i ∈ C. Voss uses a variant of the dual-rail encoding system discussed in Section 3.2 for representing elements of C^n, which means that each state in Voss is represented by a vector ((a_1, b_1), …, (a_n, b_n)), where the a_i, b_i ∈ B. The use of boolean variables allows one symbolic state to represent a large number of states. The vector s = ((a_1, b_1), …, (a_n, b_n)) (where the a_i, b_i ∈ E) represents the set of states {φ(s) : φ ∈ Φ}, where φ(s) = ((φ(a_1), φ(b_1)), …, (φ(a_n), φ(b_n))). Note that the a_i and b_i need not contain any variables. This idea can be extended so that sets of sequences can be represented by symbolic sequences. This type of representation is known as a parametric representation. The alternative representation is the characteristic representation. (For discussion of these representations, see [43, 87].)

A characteristic function χ : C^n → B represents the set {s : χ(s) = t}. Note the similarity to the way in which interpretations are represented. This indicates that χ can be represented as a BDD — the mechanics of this are presented now. Let s = (s_1, …, s_n) be a symbolic state representing the set of states S, and let v_1, …, v_m be the variables appearing in s. Let r = (r_1, …, r_n), where each r_i is a pair of boolean variables (r_{i,1}, r_{i,2}) not appearing in {v_1, …, v_m}. Thus χ(r) can be represented by a boolean expression (BDD) containing the r_{i,j} variables only. To determine whether a particular state is in the set, the r_i are instantiated in χ(r); the value obtained is t if and only if the state is in the set.

The advantage of the characteristic representation is that it is convenient to perform union and intersection operations on sets of states. Moreover, as each set is represented by one BDD, set representations are canonical, which is extremely useful. However, this monolithic representation of sets of states can be very expensive.

The primary advantage of the parametric representation is that it is very compact: n independent BDDs represent a set of states, which increases the size of the state space that can be manipulated. Moreover, this representation is particularly suitable for the symbolic simulation of the state space, and for the computation of defining sequences. It is the representation method on which STE is based.

6.2 Data Representation

Although exploiting the power of BDDs to implement the underlying trajectory evaluation efficiently is essential, there need to be complementary ways of representing and manipulating data. One way of doing this is to represent TL formulas and associated data structures symbolically. This is best explained by the following example. Suppose the circuit being verified is an integer adder. Formally, the circuit model represents the integers as bit vectors of appropriate size, and addition of integers is formally represented as bit vector manipulation. The TL formulas used to specify correctness will formally describe the behaviour of the circuit at the bit level. In our prototype tools, integer types like this are represented and manipulated in the following way.

• An abstract data type representing integers is declared. See Figure 6.1, which gives an example declaration: integers are constants, variables, or the addition, subtraction, multiplication, division, or exponentiation of two integers; integer predicates are the comparison of two integers.

• A routine which converts integer objects into bit vectors, and integer predicates into an equivalent predicate over bit vectors, is written. For convenience, this routine is referred to as bv. Typically, the bit vectors are finite and can be represented in standard ways (e.g. twos complement). However, it is also possible to have representations of infinite bit vectors (the lazy semantics of FL is useful here).

• A set of bit vector operations giving the formal semantics of the objects is implemented. Addition, for example, is modelled by operations on two bit vectors.

• A set of ADT operations corresponding to the vector operations is implemented. This means that the FL program can manipulate integer-related objects without converting them into the associated bit vectors.

    lettype N =        // Natural number expressions
          Nvar string
        | Nconst int
        | Nadd N N
        | Nsub N N
        | Nmult N N
        | Ndiv N N
        | Npow N N

Figure 6.1: An FL Data Type Representing Integers

Although the formal semantics of integer objects is given by bit vectors and the operations on bit vectors, the higher-level representation is useful for two reasons. First, it has the effect of raising the level of abstraction, which makes the verification task easier for the user since it enables the user to deal with higher-level, composite objects. Second, and more importantly, it has significant performance advantages; BDDs can be used where appropriate and other methods where BDDs fail.

This situation is depicted in Figure 6.2. The FL program stores the object d; by applying the conversion routine, bv, the bit vector d_bdd which represents d can be computed. Applying the operation f_adt to d is the same as applying the operation f_bdd to d_bdd, which is illustrated by the commutative diagram in Figure 6.2.

[Figure 6.2: Data Representation. A commutative diagram in which f_adt takes d to d′, bv takes d to d_bdd and d′ to d′_bdd, and f_bdd takes d_bdd to d′_bdd.]

Thus, even if d or d′ cannot be represented efficiently as BDDs, there is an effective way of representing and manipulating them through the abstract data type representation.
As will be seen, using this method of representation is an effective way of going beyond the limits of BDDs. It is critical that both the conversion routine bv and the ADT operations are implemented correctly so that the diagram in Figure 6.2 is commutative. In the HOL-Voss system correctness was formally proved [90]. Although I did not go through this exercise in the prototype implementations, this is a critical step in the production of a tool. However, one should note that there may be a trade-off between degree of rigour and performance. For example, in an interesting paper showing how BDDs can be implemented as a HOL derived rule [75], Harrison reports that a HOL implementation of BDDs as being fifty times slower than a Standard M L implementation. Although this work is cited as being 'superior to any existing tautology-checkers implemented in HOL', Harrison points out that other approaches to ensuring correctness can be adopted. The ADT routines that implement the operations on the data objects constitute domain knowledge, representing the verification system's higher level semantic knowledge of what bit-level operations mean. There are different ways in which domain knowledge can be provided. One method is to have a canonical representation for data objects, or to have a set of decision procedures for the type (for example, to tell whether two syntactically different objects have the same semantics — whether if they are both converted into BDD structures, the structures will be the same). There is a limit to how far this can go; for example, with the integer representations used, no canonical forms exist, and decision procedures have limitations and can be expensive. However, as will be seen this can be effective and, since it is automatically implemented, userfriendly, reducing the load of the human verifier. Another method — which can be implemented as an alternative or as a complement to the decision procedure method — is to provide an interface to an external source of knowledge. One  Chapter 6. Developing a Practical Tool  123  likely such source is a trusted theorem prover such as HOL, allowing the verifier to prove results in HOL, and then to import these results into the verification system. Although approach is very flexible and very powerful, it increases the level of expertise needed by the verifier considerably. 6.3  Combining STE and Theorem Proving  The practical importance of the compositional theory presented in Chapter 5 is that it provides a powerful way of combining STE and theorem proving. The inference rules of the compositional theory are implemented as proof rules of a theorem prover. The combination of theorem proving and STE creates a tool which provides the appropriate proof mechanism at the appropriate level. For a human to reason at the individual gate level, while conceptually simple and straightforward, is often too onerous and tedious. A single trajectory evaluation can often deal with the behaviour of hundreds or thousands of gates, depending on the application. The theorem prover allows the human verifier to use insight into the problem to combine lower-level results using the compositional theory. By using the representation method discussed above, and the compositional theory, the computational bottle-neck of automatic model checking algorithms can be widened considerably. The prototype verification systems built implement proof systems based on STE and the compositional theory for STE. 
The object of verification is to prove properties of the form |= ^ g=C>h The proof system does this by using STE as a primitive rule for proving assertions; the compositional theory is implemented as set of proof rules that can be used to infer other results. From a practical point of view, the Voss system provides a good basis for this. The user interacts with the proof system using FL. By using the appropriate FL library routines, trajectory evaluation and the compositional theory can be used. There are different ways in which this could be done and packaged.  Chapter 6. Developing a Practical Tool  124  For example, in the first prototype tool, the verification library consisted of the following rules, implemented as FL functions, each function either invoking trajectory evaluation or one of the compositional inference rules. • VOSS:  This performs trajectory evaluation.  • IDENTITY  implements the identity rule.  • CONJUNCT • SHIFT  implements conjunction.  allows assertions to be time-shifted.  • PRESTRONG  implements part of the rule of consequence, allowing the antecedent of an  assertion to be strengthened. POSTWEAK  implements the other part of the rule of consequence, allowing the conse-  quent of an assertion to be weakened. Both of these rules use domain knowledge to check the correctness of the use of the rule. • TRANS  takes two assertions, checks whether transitivity can be applied between the first  and second (i.e., the correct relationship holds between the two assertions), and if it can be, applies the rule of transitivity. • SPECIAL • SPTRANS  allows the user to specialise an assertion. takes two assertions, 7\ and T , and attempts to find a specialisation 2  H  such  that transitivity can be applied between Ti and H(T ). The heuristic used to find the spe2  cialisation (discussed later) does not compromise the safety of the verification since if it fails and no specialisation is found, no result is deduced. Moreover, if a putative specialisation is found, the correctness of the specialisation is checked by testing whether the conditions for transitivity to apply do hold once the specialisation is applied. • AUTOTIME  takes two assertions, Xi and T , and attempts to find an appropriate time-shift 2  t for one of the assertions so that transitivity can be applied. Recall that the time-shift rule  Chapter 6. Developing a Practical Tool  125  only applies if t > 0. If t < 0 is found, then the verification system shifts T\ forward by —t time steps and attempts to apply transitivity between the shifted Ti and T ; if t > 0, 2  then T is shifted forward by t time units, and then the verification system attempts to 2  apply transitivity between Ti and the shifted T . 2  • ALIGNS UB  combines the ideas of the above two rules. Given two assertions it attempts  to find a time-shift and specialisation so that when both are applied, transitivity can be used. • PRETEND  allows a desired result to be assumed without proof. When deciding whether  an overall proof structure is correct, it may be useful to assume some of the sub-results and then see whether combining the sub-results will obtain the overall goal, before putting effort into proving the sub-results. Furthermore, in a long proof built up over some period of time it may be desirable at different stages in proof development to replace some calls of V O S S with  PRETEND.  
Having proved a property of the circuit using STE it may  take too much time in proof development to always perform all STE verifications when the proof script is run. Although at the end, the entire verification script should be run completely, it is not necessary to always perform all trajectory evaluations in proof development. An important part of implementing this verification methodology was to integrate the trajectory evaluation and theorem proving aspects into one tool. Not only does this make the methodology easier for the user (since the quirks of only one system have to be learned and only one conceptual framework and set of notations have to be learned), the practical soundness of the system is maintained (the user does not have to translate from one formalism to another). Moreover, using F L as the interface is very beneficial. Although this requires the user to be familiar with FL, once learned it provides a flexible and powerful proof tool. Using the basic verification library provided by the tool, the user can package the routines in different ways.  Chapter 6. Developing a Practical Tool  126  The proof is written as an FL program that invokes the proof rules appropriately. This allows the proof to be built up in parts and combined. The use of a fully programmable proof script language — FL — removes much drudgery and tedium. A critical factor in trajectory evaluation, affecting both the performance and the automatic nature of STE, is the choice of the BDD variable orderings used in the trajectory evaluation. A poor choice of variable ordering can make trajectory evaluation impossible or slow [44]. A l though the use of dynamic variable ordering techniques (one of which has been implemented in Voss) ameliorates the situation, the compositional method means that dynamic variable ordering is not a panacea. In many cases, there is simply not one variable ordering that can be used. The strength of the compositional theory is that it allows different variable orderings to be used for different trajectory evaluations. If different variable orderings must be used for each of many trajectory evaluations (for some proofs hundreds of trajectory evaluations could be done), using dynamic variable ordering alone might significantly degrade performance. On other hand, in many applications, good heuristics exist for choosing variable ordering automatically based on the structure of the TL formulas. One of the advantages of representing data at a high level (an integer ADT) is that knowledge of the type and operations on the type can be used to determine appropriate variable orderings. A useful technique is to provide as part of the FL library implementing a particular ADT, a function which takes an expression of the type and produces a 'good' variable ordering. This particular example illustrates the advantages of incorporating heuristics into a system to aid the user. Other examples of heuristics which proved to be useful are the heuristics which takes two assertions and try tofindappropriate specialisations and time-shifts so that transitivity can be used between the two assertions. The algorithms that implement these heuristics are straightforward. Although there are a number of possible heuristics and algorithms that could be used, experience showed that simple implementations are quite flexible.  Chapter 6. 
Developing a Practical Tool  127  Finding a time-shift: This algorithm takes the consequent of one assertion and the antecedent of another and determines whether if one of the formulas is time-shifted, the two formulas are related to each other (in that their defining sequences are ordered by the information ordering). String matching is the core of the algorithm, and although the extremely large 'alphabet' restricts the sophistication of the string matching algorithms that can be applied, in practice the simple structure of the formulas means that simple string matching algorithms are quite adequate. Finding a specialisation: This heuristic performs a restricted unification between two formulas to discover whether if one of the formulas is specialised, it is implied by the other (in that the defining sequence of one is ordered with respect to that of the specialised formula). A similar approach is used in implementing Generalised Transitivity (Theorem 5.31). Since semantic information must be used as well as syntactic (two syntactically different expressions may be semantically equivalent), the effectiveness of the algorithm is limited by the power of the domain knowledge incorporated into the tool. However, the simple structure of most antecedents means that a simple heuristic works well. It is also possible to incorporate both heuristics into one heuristic so that candidate timeshifts and specialisations are sought at the same time. To implement this completely is much more difficult since there may be a number of different time-shift and specialisation combinations that can be applied. It can also be computationally more expensive, since for each possible time-shift it may be necessary to use different domain knowledge. However, in practice, since formulas tend not be very large, this can be useful and practical. Here, the representation of data at the ADT level is very important practically since high-level information can be used to find whether time-shifts and specialisations will be appropriate; if a lower level representation were used, much more work would need to be done. In all cases, once a transformation is found, it is automatically applied and checked; this also  Chapter 6. Developing a Practical Tool  128  shows that heuristics can be incorporated without compromising the soundness of the proof system. The core inference rules are always used for deducing results; the heuristics provided by the user or the verification system as FL functions are there to automate the proof (at least partially) in determining how the inference rules are to be used, and at no stage is the safety of the result compromised. Moreover, if a transformation cannot be found, suitable error messages can be printed indicating why such a transformation could not be found, helping the user to determine whether the attempted use of the rule was wrong (e.g. because the desired result is false), whether more information is needed (e.g. perhaps the rule pf consequence must be applied to one of the assertions first), or more domain knowledge must be provided by the user.  6.4  Extending Trajectory Evaluation Algorithms  The core of the practical tool proposed here is the ability to perform trajectory evaluation to check assertions of the form |= t\ g=z>h where g and h are TL formulas (actually T L forn  mulas since we are dealing with circuit models). 
The basis of these algorithms is the trajectory evaluation facility of Voss, which can compute results of the form j= (\ A=^>C ), where A and C are trajectory formulas. There is a trade-off between how efficiently trajectory evaluation can be done, and the class of assertions that can be checked. This sectionfirstdescribes and justifies the restrictions placed on assertions, and then outlines three possible algorithms that can be used to extend Voss's STE facility. (The advantages and disadvantages of these algorithms are discussed in Section 7.6 after the presentation of experimental evidence.) 6.4.1 Restrictions What are the problems in determining whether (= \ g=z>h ^?  Chapter 6. Developing a Practical Tool  129  First, STE computes the =g> relation, rather than the =D> relation. However, as shown in Section 4.4.3, if only the realisable fragment of T L is used, there is an efficient way to deduce n  the = o relation from the =^> relation. The limitation to the realisable fragment means that users cannot explicitly check whether a component of the circuit takes on an overconstrained value. But, the nature of the circuit model means that this is checked for implicitly in any trajectory evaluation. The underlying trajectory evaluation engine can easily check for antecedent failures by testing whether a Z appears in the defining trajectory of the antecedent. Thus, I argue that this limitation is not a severe restriction, and worth the price. Second, allowing a general TL„ formula in the antecedent can be very costly since it may require numerous trajectory evaluations to be done. Recall from Chapter 4 that computing the defining sequence sets of a disjunction is done by taking the union of the defining sequence sets of the disjuncts. Thus, the cardinality of the defining sequence sets is proportional to the number of disjuncts. At first sight, it may seem that in practice that the structure of formulas is such that this will not be a real problem. For circuit verification, how many formulas have more than a dozen disjuncts (a number of sequences that could probably easily be dealt with)? However, this is misleading since while disjuncts may not appear explicitly in a formula, they may actually be there, particularly when dealing with non-boolean data types. For example, a predicate on an integer data type such as [I] + [ J] = k -f / + m can translate into a very large number of disjuncts, even for moderate sized bit-widths. Thus, for performance reasons, one restriction placed on formulas is that trajectory evaluation is only done for assertions that have trajectory formulas as antecedents, i.e. for formulas g such that the cardinality of A*(<7) is one. Besides the performance justification, experience with STE verification has shown that the main need for enriching the logic is to enrich the consequents rather than the antecedents. Moreover, the use of the compositional theory allows the  Chapter 6. Developing a Practical Tool  130  enriching of antecedents indirectly (for example through the disjunction, until, and general transitivity rules). Nevertheless, even though experience so far with STE has not shown this to be a significant restriction, this is an undesirable restriction, and more work needs to be done here. The final restriction is made with respect to the infinite operators such as Global. Since the state space being modelled is finite, all trajectories must have repeated states; thus, in principle, it is only necessary to investigate a prefix of a trajectory. 
However, this requires knowing when a state in a trajectory has been repeated. Since, in the tool, symbolic sequences represent a number of sequences or trajectories, given a symbolic sequence we have to know for which element in the sequence it is the case that for all interpretations of variables there have been repeated states. The parametric representation of state is unsuitable for this computation, which requires the characteristic representation to be used. However, if the characteristic representation is to be used, then the advantages of STE over other model checking approaches is reduced. If infinite formulas must be tested, other approaches may well be more suitable. Moreover, for hardware verification, infinite operators are less important than in more general situations since timing becomes more critical. We are not interested that after a given stimulus, output happens some time in the future; we want to know that output happens within x ns of input. Trajectory evaluation's good model of time and its ability to support verifications where precise timing is important is one of its great advantages. Finally, in the same way the compositional rules can be used to enrich the antecedent, they can be used to allow the infinite operators to be expressed usefully (the until rule and its corollaries are good examples here). In summary, the STE-based algorithms proposed here check assertions of the form |= \ A = o h ), where 1. A and h are in the realisable fragment of TL„; 2. A is a trajectory formula; and 3. h does not contain any infinite operators.  Chapter 6. Developing a Practical Tool  131  Through the use of the compositional rules, the limitations of (2) and (3) can be partially overcome. The rest of this chapter examines how Voss's ability to check formulas |=  ^ A=g>C  } can  be used effectively. Three algorithms are presented. 6.4.2  Direct Method  If A and C are trajectory formulas, the standard use of STE for model checking trajectory assertion of the form t\ A=>C } is straight-forward since the cardinality of A (A) (and hence t  T (A)) and A (C) are one. 8 and 8 are constructed, and r is computed from 8 . The last k  k  A  C  part of the verification is to check whether 8  A  C  A  QT . A  Where we choose the consequent to be a general formula g of TL„, we need to consider the entire set A (g). However, the basic idea is the same: construct 8 and compute r , and then t  A  A  check whether \/8 e A (g), 8 C T . HOW this is done is sketched in the pseudo-code below: t  A  Compute(g,j)= case g of gohgi Next g -ic/  : : : :  r/[i] = H Compute^, j) A Compute^, j) Compute^, j + - Compute(#,j)  t  :  t  f  : f  [t]  This algorithm is simple and straight-forward, although care must be taken in implementations to ensure efficiency, particularly when dealing with ADTs such as vectors and integers, and derived operators such as the bounded versions of Global. First, only necessary information must be extracted from T . Second, a very important optimisation in the Voss tool is event A  scheduling — usually from one time step to the next only a few state holding elements change their values. By detecting that components are stable for long periods of time, much work can be saved. Any modifications to the STE algorithm must not interfere with this.  Chapter 6. 
Developing a Practical Tool  132  The way this was implemented in the prototype tool was to (i) determine from the consequent which state components are important, (ii) use Voss's trace facility to obtain the values of those components at relevant times, and (iii) compute whether the necessary relationship holds. All of this can be done in FL on top of Voss, obviating the need to make any changes to the underlying trajectory evaluation algorithm. Not only did this choice of implementation make developing the prototype much easier, but more fundamentally it means that the event scheduling capacity of Voss is not impaired in any way. As a side note, modifications to this approach to deal with the infinite operators is, in principle, straightforward. At each step in the trajectory evaluation the set of reachable states is added to. Once afix-pointis reached, the trajectory evaluation can stop. The use of partial information might improve the performance of the modification (in some cases — but not all — once a state s has been explored, we need not visit any state above s in the information ordering). Provided we are prepared to pay the cost of computing the characteristic representation of the state space, this is feasible, although care must be taken not to conflict with the event scheduling feature of Voss. 6.4.3 Using Testing Machines An alternative way of extending STE is through the use of testing machines. The goal is to determine, given a model M, whether \=  M  \ A=^>g  The idea behind testing machines  is to answer this question by constructing a model M' and a trajectory formula C such that g  \= , § A=^>C ) if and only if \=M \ A=^>g [>. An analogous approach is the one adopted M  g  by Clarke et al. in extending their CTL model checking tool by using tableaux so that LTL formulas can be checked [38]. Other verification techniques also use this idea of using 'satellite' or observer processes to capture properties of systems [9]. As an example, using only trajectory formulas STE can check whether a node always takes  Chapter 6. Developing a Practical Tool  133  on a certain value, y, say; it cannot check that a node never takes on the value y since the corresponding predicate is not simple and thus the question cannot be phrased as a trajectory formula. However, suppose the circuit were to have added to it comparator circuitry that compares the value on the node to y and sets a new node ./V with H if the node doesn't have the value y and L if it does. To check whether the node takes on the value can now be phrased as a trajectory formula. This section gives a brief outline of this, and detail can be found in Appendix B. As presented, model checking takes a model and a formula and then performs some computation to check whether the model satisfies the formula. The basic motivation behind testing machines is that some of the computation task can be simplified by moving the computation within the model itself. In essence what we do is construct a circuit that performs the model checking, compose this circuit with the existing circuit and then do straightforward model checking on the new circuit. This task is simplified by the close relationship between Q and C. There are thus two steps in the model checking algorithm. Thefirstis to take the formula and to construct the testing machine; the second is to compose the testing machine with the original circuit and to perform model checking. The construction of the testing machine is done recursively based on the structure of the formula. 
An important part of the algorithm is constructing the testing machines for the basic predicates. For predicates dealing with boolean nodes, the construction is straightforward, essentially doing a type conversion. For other types — especially integers — it is somewhat more complex; for example, integer comparator and arithmetic circuits are needed. Given the testing machines for the basic predicates, there is a suite of standard ways in which these testing machines are composed, depending on the structure of the formula. One of the complexities is dealing with timing information. For example, in the formula g V h, g and h may be referring to instants in time far apart. This will mean that the testing machines that compute g and h will probably produce their results in time instants far apart,  Chapter 6. Developing a Practical Tool  134  which in turn means that some sort of memory may be necessary. If the formula g V h is nested within a temporal operator such as the bounded always operator, it may be necessary to compute the values of g V h for many instants in time, which means that a large number of results may need to be stored temporarily. This will affect the computation and memory costs of model checking. The testing circuitry does not deal with the unbounded operators Global and E x i s t s . The method of Section 6.4.2 could deal with these operators by recording the set of states already examined and other information. Testing circuitry could be built that duplicates that. As the state space is finite, we know that at some finite time all states will have been examined, but since the operators are unbounded it cannot be determined a priori at which instant in time all states will have been examined. Thus the scheme for examining the testing circuit at a particular moment fails. It seems that this can only be dealt with by modifying the STE algorithm. 6.4.4  Using Mapping Information  Suppose that A and C are trajectory formulas which have the property that no boolean variable in C appears in A. Let $ i = (j A=t>C ). This is the set of assignments of boolean variables to values for which A=f>C. In particular, it describes the relationship between the variables in the antecedent formula and the variables in the consequent formula which must hold for the trajectory evaluation to succeed. C essentially extracts relevant components of T , SO by making C general enough, enough A  useful mapping information can be used to make model checking TL„ feasible. Provided enough information is extracted, we can use $ i to determine whether (= t\ A=c>g holds: if for all interpretations in $ i , g holds (this is formalised later), and provided some side conditions hold, then so does |= ^ A=$>g ). An example will illustrate the method.  135  Chapter 6. Developing a Practical Tool  J K L  M N  Figure 6.3: A CSA Adder The Carry-Save adder (CSA) shown in Figure 6.3 is used in a number of arithmetic circuits. An n-bit CSA adder consists of n independent single-bit full adders. For simplicity in the example, we consider a one-bit adder. Suppose that at time 0, the state of the circuit is such that node J has the value j, node K has the value k, and L has the value /. Then at time 1, the state of the circuit should be such that M has the value j © k © / ( 0 representing exclusive or) and N has the value j A k V j A / V k V /. This is easy enough to verify using a trajectory formula. 
However, there are verifications in which what we are interested in is not what the particular values of nodes M and N are, but that the sum of the values of nodes J, K and L at time 0 is equal to the sum of the nodes M and N at time 1. (This is exactly what we are interested in when verifying a Wallace-tree multiplier). In STE, this property can not be verified directly. Define the trajectory formulas A and C by: A=(J  = j)A(K = k)A(L = I)  C = Next (M = m) A(N = n) and then compute $ i = \ A=t>C $ i gives the constraints which must hold for the trajectory evaluation to hold. In particular it gives the constraints relating j , k and / with m and n. Suppose V#> € $i,(f>(g) = t where g = (j; + k +1 = m + n) (assuming here two-bit addition). If this is the case then we know that for each mapping of boolean variables to values for which the STE holds, (j + k + l = m + n). Or putting this in terms of an expression in T L , that h — Next (j + k + l = [M] + [A ]) holds. 7  n  Chapter 6. Developing a Practical Tool  136  Essentially g is h where we substitute the variables m and n into h to act as place-holders for the state components M and N. C is a way of extracting appropriate values out of TA- SO, if V</><E $ 1 , </>(#)  = t a n d $ i = { A=>C > | then (] A = t > / * |>.  One important check needs to be made - we must ensure that the above condition is not satisfied vacuously by $x being empty or only containing very few interpretations. What we want to ensure is that $ i covers all the interesting cases: that for every possible assignment of values to the boolean variables j , k and /, there is an assignment to the boolean variables m and n such that the trajectory evaluation holds. This is formalised now. Definition 6.1.  Let U be a set of variables, and cf>: V —> B be an interpretation of variables. The set of extensions of 4> with respect to U is Ext(4>, U) = {tp e  $ : Vu  € V - U, 4>(v) = tp(v)}  •  The condition that $ i is non-trivial can be expressed as: for every interpretation cj> € $, there is an interpretation ip e Ext(4>,v(A)) where v(A) is the set of variables in A, andtp £ $x. In other words for every interpretation of variables of A there is an extension of that interpretation to include variables in C, such that the extension is an element of $ i . Note: If h! is a T L formula not containing temporal operators, then by the remarks precedn  ing Theorem 3.5, then we can consider h' as a predicate from C to Q. For convenience, if h' n  is strictly dependent on nodes  { n i , . . . , n }, r  then we write h\xi,...  ,  x ). r  Theorem 6.1.  Let A be a trajectory formula, and h = Next h' be a T L formula such that h! contains no n  temporal operators. Let h' be strictly dependent on N  = {ni,... ,  n }. r  Let C = Next (r\\[nj) = WJ) where the WJ are distinct and disjoint from the variables in A; let W = {wi,... , w }. Suppose: r  1. $! = ^  A^>C\;  Chapter 6. Developing a Practical Tool  2. V0G $ il>(h'(w ... u  3. V0 G  $ ,  ,w ))  u  30 G  £ ^ ( 0 ,  137  =t;  r  W), and 0  G  4. V ^ € * i ; » = 0 , l ; j = l , . . . , n :  $ i ; and  T/  (  A  ^ Z.  [J]  )  Then (= <] A=t>/i Proof. (1)  V0 G $ , 30 G Ext(<j>, W), 0 G $ i  (2)  V0  (3)  V0 G  (4)  V0 G $ , 3-0 G £ arf(0,PV),(jf [A:]Erf [i],fc= 1,... ,n  G <&,  30  G  Hyp. 3.  Ext(4, W),0(A)=>0(C)  (l),Hyp. 1.  30 G ^ ( 0 , W), <$f • rf (C)  ,  (2), Theorem 4.5.  {A)  (C)  (>i)  (3) , structure of C . N  (5)  V0  G $ ,  30  G £&(<£,  W),  0 K )  • r f ^ k ] , j = 1,... , r (4) , structure of C.  
(6)  V0G $,30  G  Ext(4,W)M ) Wj  = T« [nA,j A)  =  l,...,r  (5) , Hyp. 4. (7)  / i ' ( 0 ( u ; i ) , . . . ,V>(uv))  (8)  V0 G  $ ,  30  G £to(0,  = t  Hyp. 2, definition of  VK), ^ ' ( ^ [ m ] , . . . , r  0(/>')•  = t  ^ K D  (6) , (7).  fc'(r*(yi)M,...  [nr]) = t  V0 G  (10)  V0G $ , 5 a < r ^ ) , 0 ( A ) ) = t  (9).  (11)  V0 G $,0(A)=o0(/i)  (10), Lemma4.4, Hyp. 4.  $ ,  , rf  (A)  (9)  (8),  Wj  not in A.  • This theorem can be generalised to deal with assertions such as |= (j A=>( A Next'fy) |) i=i  and implemented.  Chapter 7 Examples  This chapter shows that the ideas presented in this thesis can be used in practice. The verification of the examples done in this chapter requires a relatively rich temporal logic — trajectory formulas are often not rich enough — and efficient methods of model checking. Efficient algorithms for performing STE are essential, but in themselves not enough; the compositional theory for TL is necessary. Section 7.1 presents the verification of a number of simple examples performed using the first prototype verification tool. These examples are used as illustrations of the use of the inference rules. Section 7.2 presents an example verification of a circuit that is well suited for verification by traditional BDD-based model checkers such as SMV. The B8ZS encoder chip verified has a small state space which is easily tractable by the traditional methods. While the circuit can easily be represented as a partially ordered model, it is difficult to use the methods proposed by this thesis to verify this circuit completely. This example shows some interesting points about the need for expressive logics, and shows some limitations of the approach proposed in this thesis. Section 7.3 describes the verification of more substantial circuits, multipliers, which can have up to 20 000 gates. These are circuits that are beyond BDD-based automatic model checkers and require the use of methods such as composition and abstraction. The verification of a number of different multipliers are described and analysed. One of the verified multipliers is Benchmark 17 of the IFIP WG10.5 Hardware Verification Benchmark Suite. Section 7.4 builds on the verification of this multiplier and shows how its 138  Chapter 7. Examples  139  verification is used in the verification of a parallel matrix multiplier circuit (Benchmark 22 of the suite); the largest version of this circuit verified contains over 100 000 gates. The examples mentioned here all show that these methods are well suited to examples where detailed timings at which events happen is known. Section 7.5 shows how time can be dealt with in a more generalised way. Although this section is more speculative in nature (the verification has not been mechanised) it shows that using the inference rules and inducting over time, allows the practical use of TL in a more expressive way. Finally, Section 7.6 summarises the results of this chapter and evaluates the methods proposed. 7.1  Simple Example  7.1.1  Simple Example 1  For thefirstexample, consider the circuit shown in Figure 7.1 which takes in three numbers m , n and o on nodes M, N and O, and produces o + m a x ( m , n) on R. 
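Before looking at the proof, the intended behaviour and the three-stage structure of the circuit (described next) can be summarised by a small executable sketch. This is purely illustrative: the function names are invented, and the sketch models the arithmetic only, not the gate-level design or the STE runs.

```python
# Illustrative reference model of Simple Example 1: the circuit computes
# o + max(m, n).  The helpers mirror the comparator, selector and adder
# stages described below; the names are hypothetical.
def comparator(m, n):
    return m > n                  # node P: high exactly when m > n

def selector(p, m, n):
    return m if p else n          # node Q: the larger of m and n

def adder(q, o):
    return q + o                  # node R: the final sum

def circuit(m, n, o):
    return adder(selector(comparator(m, n), m, n), o)

# The composed stages meet the specification o + max(m, n); the compositional
# proof below follows the same structure, with one STE result per stage.
assert all(circuit(m, n, o) == o + max(m, n)
           for m in range(8) for n in range(8) for o in range(8))
```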
There are three parts to the circuit: a comparator compares the value on M with the value on N, and produces H on P if the number on M is bigger than the number on N and 0 otherwise; a selector takes the values at M, N and O and produces at node Q the value at M if P is set to H, and the value at N otherwise; the third part of the circuit takes the values at nodes Q and O, and produces their sum at node R. This example is one which could be verified using STE alone, but its small size makes it useful as an example.

Verification starts by checking the correctness of the individual components. The verification of each component is done in the presence of the rest of the system, which means that any unintended interference will be detected. These individual proofs are put together using specialisation, time-shifting and transitivity. An outline of the formal proof follows. To simplify notation, a → b | c is used as shorthand for (a ⇒ b) ∧ ((¬a) ⇒ c).

Figure 7.1: Simple Example 1

Let A' = ([M] = m) ∧ ([N] = n) ∧ ([O] = o).
Let A = A' ∧ Next A' ∧ Next^2 A'.
Let C = Next^3 (m > n → [R] = m + o | [R] = n + o).

We wish to show that |= ⟨| A ⟹ C |⟩.

(1) |= ⟨| A ⟹ Next ([P] = (m > n)) |⟩                                          By STE
(2) |= ⟨| A' ∧ ([P] = x) ⟹ Next (x → [Q] = m | [Q] = n) |⟩                     By STE
(3) |= ⟨| ([O] = y) ∧ ([Q] = z) ⟹ Next ([R] = y + z) |⟩                        By STE
(4) |= ⟨| Next (A' ∧ ([P] = (m > n))) ⟹ Next^2 (m > n → [Q] = m | [Q] = n) |⟩  Time-shift, specialise (2)
(5) |= ⟨| A ⟹ Next^2 (m > n → [Q] = m | [Q] = n) |⟩                            (1), (4), transitivity
(6) |= ⟨| Next^2 (([O] = o) ∧ (m > n → [Q] = m | [Q] = n)) ⟹ C |⟩              Time-shift, specialise (3)
(7) |= ⟨| A ⟹ C |⟩                                                             (5), (6), transitivity

Perhaps the most interesting part of this proof is how specialisation and transitivity are used. Consider how (1) and (2) are combined. Note that A contains all the information that Next A' does; and note the similarity in structure between [P] = (m > n) and [P] = x. Time-shifting (2) and substituting m > n for x transforms (2) into (4), which can be combined with (1) using transitivity. The other place in which specialisation is used is in line (6). Here, using only substitution on line (3) is inadequate; a much richer transformation is needed. Rather than just substituting one expression for z in (3), two different substitutions are made, which are qualified and combined (one substitution is made when m > n, and the other when m ≤ n).

This proof was done in the first verification tool, where it is easier to do than manually because the time-shifts and specialisations are found automatically. Steps 4 and 5 are done with a call to one of the automated rules; and steps 6 and 7 with another call to the same rule. A full description can be found in [76], and the FL proof script can be found in Section C.1.

7.1.2 Hidden Weighted Bit

The hidden weighted bit problem was one of the first to be proved to need exponential space to verify using traditional BDD-based methods [21]. A circuit for an 8-bit version is shown in Figure 7.2. The verification was done in the first prototype tool; the proof is outlined here, and a full description, including the one-page proof script, is given in a technical report [76].

Figure 7.2: Circuit for the 8-bit Hidden Weighted Bit Problem

In this version, the global input x_1, ..., x_n is copied to two buffers. The Counter part of the circuit computes the number of 1's on the input (i.e. Σ_i x_i).
The Chooser part of the circuit takes the number j output on CountNode (the number is in binary form, hence if there are n input lines, CountNode comprises ⌊lg n⌋ + 1 lines), and outputs the value x_j on Result and 0 on Error when j > 0. If j = 0 then Error is set to 1.

Intuitively, a verification of this circuit as a whole takes exponential time and space (in n) because the output value on CountNode is so complicated, in terms of the boolean variables, that no suitable variable ordering can be found so that the Chooser part of the circuit can be verified efficiently. The virtue of the compositional approach is clearly illustrated: by decoupling the verification of the two parts of the circuit, we can choose suitable individual variable orderings for both parts of the circuit; moreover, it is more efficient to verify the Chooser circuit for an arbitrary input j (which needs only very simple BDDs to represent it), and then substitute for j the actual input, than to verify for the actual input (which needs more complicated BDDs to represent it).

There are five steps in the proof, in which all the time-shifts and specialisations are found automatically.

• The proof that the copying of the input to the buffer is correct — BufferTheorem.
• The verification of the Counter part of the circuit — CounterTheorem.
• The composition of BufferTheorem and CounterTheorem. This is done in two steps: first, CounterTheorem is time-shifted along so that transitivity between BufferTheorem and CounterTheorem can be used to produce BufferCounterTheorem. BufferCounterTheorem is conjoined with BufferTheorem so that we can use the value of Buffer1 at a later stage. Call the result of this stage stage1Theorem.
• Verification of the Chooser part of the circuitry — ChooserTheorem.
• Composition of stage1Theorem and ChooserTheorem by time-shifting ChooserTheorem by an appropriate amount and specialising this so that transitivity can be used between stage1Theorem and ChooserTheorem.

Results: We verified the circuit for different values of n (4, 8, 16, 32, 64, 128). For these values, verification takes roughly cubic time (and, importantly, space was not an issue). The verification of the 128-bit problem took just under 27 minutes on a Sun 10/51. Compared to this, verification of the system as one unit was not possible for n = 64 or larger. The FL script for the verification is shown in Section C.2.

7.1.3 Carry-save Adder

The carry-save adder (CSA) shown in Figure 6.3 was verified using all three extensions to the STE algorithm described in Chapter 6. Table 7.3 summarises the computational cost of verification of a 64-bit CSA.

      Algorithm              Time (s)
  1   Direct                     3.8
  2   Testing Machine            3.6
  3   Mapping information        2.6

Table 7.3: CSA Verification: Experimental Results

The experiments were run on a DEC Alpha 3000, and show that for all three approaches, verification is easily accomplished. The FL script for this is shown in Section C.3. Note that the compositional theory is not used to verify this circuit.

7.2 B8ZS Encoder/Decoder

This example shows the verification of a B8ZS encoder, a very simple circuit but one which would be very difficult to verify using traditional STE, and illustrates some points about the style of verification. Note that the compositional theory is not used to verify this circuit.

7.2.1 Description of Circuit

Bipolar with eight zero substitution coding (B8ZS) is a method of coding data transmission used in certain networks.
Some digital networks use Alternate Mark Inversion: zeros are encoded by '0', and ones are encoded alternately by '+' and '-'. The alternation of pluses and minuses is used to help resynchronise the network. If there are too many zeros in a row (over fifteen - something common in data transmission) the clock may wander. B8ZS encoding is used to encode any sequence of eight zeros by a code word. If the preceding 1 was encoded by '+', then the code word '000+-0-+' is substituted; if the preceding 1 was encoded with a '-', then the code word is '000-+0+-'. Using this encoding, the maximum allowable number of consecutive zeros is seven.

The implementation of the circuit is taken from the design of a CMOS ZPAL implementation of the encoder (and corresponding decoder) by Advanced Micro Devices [4]. The encoder comprises two parts. One PAL detects strings of eight zeros and delays the input stream to ensure alignment. The second PAL then encodes the data, depending on whether the first PAL has detected eight zeros or not. Figure 7.3 gives an external view of the chip. The inputs are a reset line (active low), and NRZ_IN which provides the input. There are two outputs, PPO and NPO, which as a pair represent the encoding: (1,0) is the '+' encoding of a one, (0,1) is the '-' encoding of a one, (0,0) encodes a zero, and (1,1) is not used. Output emerges six clock cycles after input.

Figure 7.3: B8ZS Encoder (inputs RST and NRZ_IN; outputs PPO and NPO)

7.2.2 Verification

There are two questions one could ask in verification:

1. Does the implementation meet its specification? Here we want to check that the output we see on NPO and PPO is consistent with the input.

2. Does the implementation have the properties that we expect? (Specification validation) In particular, is it the case that:

• At no stage are there eight consecutive (PPO, NPO) pairs which encode a zero;
• At no stage are there fifteen or more consecutive zeros on the PPO output; and
• At no stage are there fifteen or more consecutive zeros on the NPO output.

Checking that the implementation meets the specification is a bit tricky, and shows the need for a richer logic than the set of trajectory formulas. With trajectory formulas, the obvious way to perform verification is to examine the output and check that the output produced is determined by the finite state machine which the PALs implement. However, the equations of the FSM are complicated and non-intuitive. Verification that the implementation is 'correct' in this sense does not give us information about the specification. Worse, essentially the verification conditions would be a duplicate of the implementation, increasing the likelihood of an error being duplicated. And there do not seem to be easier, higher-level ways of expressing correctness using trajectory formulas, since the circuit has the property that the n-th output bit is dependent on the first input bit.

Using the richer logic, a far better way of verifying the circuit is to show that the input can be inferred from the output. Suppose that we want to check the output bit pair at time k (recall that the output is encoded as the (PPO, NPO) pair). If this bit pair is in the middle of one of the code words then the input bit at time k − 6 must be a zero; otherwise the (k − 6)-th input bit can be inferred directly from the value of the bit pair. The testing machine method was used in verification.
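The coding rules described at the start of this section are compact enough to state executably, which may help in reading the verification that follows. The sketch below is illustrative only: the function name is invented, and it models the coding scheme itself, not the AMD PAL implementation or its six-cycle latency.

```python
# Illustrative model of B8ZS encoding: AMI coding in which every run of
# eight zeros is replaced by a code word containing bipolar violations.
def b8zs_encode(bits):
    """bits: iterable of 0/1.  Returns a list of symbols from {'0', '+', '-'}."""
    out = []
    last_pulse = '-'      # polarity of the most recent 1 (AMI alternation)
    zeros = 0             # length of the current run of zeros
    for b in bits:
        if b == 1:
            last_pulse = '+' if last_pulse == '-' else '-'
            out.append(last_pulse)
            zeros = 0
        else:
            out.append('0')
            zeros += 1
            if zeros == 8:
                # substitute the last eight zeros by the appropriate code word
                word = '000+-0-+' if last_pulse == '+' else '000-+0+-'
                out[-8:] = list(word)
                zeros = 0
    return out

# After encoding, at most seven consecutive '0' symbols can appear.
encoded = ''.join(b8zs_encode([1] + [0] * 20 + [1]))
assert '0' * 8 not in encoded
```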
To test that the bits are correctly translated, the proof first shows that after being reset the encoder enters a set of reachable states, and that once in a reachable state the encoder remains in this set of states. Next, the proof shows that if the encoder starts in the reachable set then the output of the encoder is correct. The computational cost of all of this is approximately 30s on a Sun 10/51.

The second step is to check that the implementation has properties that cannot be directly inferred from the design. In particular, we want to show that at no stage are there eight or more zeros consecutively produced by the encoding of PPO and NPO, and also that, looking at PPO and NPO individually, at no stage are there fifteen or more zeros consecutively. These conditions can be expressed succinctly in TL, while they could not be expressed as trajectory formulas. The major restriction here is that, using testing machines, the antecedent can only be a finite formula. We cannot check that this result holds for arbitrary input. What we can show is that given arbitrary input of length n the circuit has the properties we expect. Using testing machines, verification for n = 100 presents no problem (10s on a Sun 10/51). In principle, the direct method could verify the general case.

The final verification that was done was to implement the complementary B8ZS decoder and to check that when the output of the encoder is given as input to the decoder, the output of the decoder is just the input of the encoder, suitably delayed. Again, it was possible using the testing machine method to check this for finite input prefixes. An error was detected: the initial states of the encoder and decoder are not synchronised. If the first eight input bits given to the encoder are zero, the code word used by the encoder is '000+-0-+'; however, the decoder expects the other code word to be used if the first eight bits are zero. This error only occurs when the first eight bits are zero, as the state transition table of the decoder has the pleasant property that the first encoded 1 (either a '+' or a '-') emitted by the encoder synchronises the decoder.

This example illustrates some interesting points about verification. However, it is not a good example for trajectory evaluation; since the state space of the circuit is quite small (fewer than 20 state-holding components), other verification methods work well.

7.3 Multipliers

Since BDDs are not able to represent the multiplication of two numbers efficiently [21], automatic model checking algorithms find the verification of multipliers very challenging. For this reason, multipliers have received much attention in the literature. The methods proposed in this thesis have been used successfully to verify a number of multipliers: three of these examples are briefly discussed, and then one case is presented in great detail. The section concludes by comparing these verification studies to other work.

7.3.1 Preliminary Work

The first multiplier verified using a compositional theory for STE was a simple n-bit multiplier consisting of n full adders. The verification is accomplished by using STE to prove that each adder works correctly, and then by applying the inference rules to show that the collection of adders performs multiplication. The key inference rules used were time-shifting, specialisation, transitivity, and rules of consequence.
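The shape of that argument can be pictured with a small executable model: each stage folds one partial product into a running sum, and the compositional claim is that chaining the per-stage results yields multiplication. The sketch below is illustrative only; it models the arithmetic of the decomposition, not the gate-level adders or the STE runs themselves, and the names are invented.

```python
# Illustrative model of the compositional argument for the simple multiplier:
# stage i adds the partial product (a if b_i else 0), shifted by i bits, into
# an accumulator.  Each stage corresponds to one adder verified by STE; the
# inference rules (time-shifting, specialisation, transitivity) glue the
# per-stage results together.
def stage(acc, a, b_i, i):
    # the per-stage property: new accumulator = old accumulator + a * b_i * 2^i
    return acc + ((a << i) if b_i else 0)

def multiply(a, b, n):
    acc = 0
    for i in range(n):
        acc = stage(acc, a, (b >> i) & 1, i)
    return acc

# Chaining the per-stage properties gives a * b, which is what the composed
# theorem asserts of the real circuit.
assert all(multiply(a, b, 4) == a * b for a in range(16) for b in range(16))
```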
Of immense practical importance in the prototype tool used to perform the verification was the ability to use a simple theorem prover coupled with some decision procedures to reason about integers. This enabled the tool to break the limitation of BDDs. Also important for ease of use of the system is that specialisations and time-shifts were all found automatically by the tool.  Chapter 7. Examples  148  The complete verification of this 64 bit multiplier took just less than 15 minutes of CPU time on a Sun 10/51. For this verification, trajectory formulas were sufficient to express all needed properties. A full description of the verification, including the proof script can be found in a technical report [76]. The next step — the verification of a Wallace tree multiplier [66] — showed the need for a richer logic. A Wallace tree multiplier uses Carry-Save adders (CSA) as its basic components. Example 5.9 indicated that what is important in the verification of a CSA is to show that the sum of the two outputs is the sum of the three inputs. This cannot be represented as a trajectory formula. What trajectory formulas can represent is the particular values of each output, which is not helpful. As a preliminary test, the mapping method was used to extend the expressiveness of trajectory evaluation based verification, and the verification completed. The implementation of the prototype algorithm was not particularly efficient, but the need for a richer logic, and the feasibility of the approach was demonstrated. 7.3.2 IEEE Floating Point Multiplier One of the largest verifications done using the theory presented in this thesis is the verification of an IEEE compliantfloatingpoint multiplier by Aagaard and Seger [2]. The multiplier, implemented in structural VHDL, includes the following features: • double precisionfloatingpoint; • radix eight multiplier array with carry-save adders; • four stage pipeline; and • three 56-bit carry-select adders. The circuit verified is approximately 33 000 gates in size.  Chapter 7. Examples  149  The verification was done using the VossProver, a proof system built by Seger on top of Voss. Based on the first prototype tool discussed here, this implements the theory presented in [78], augmented by using the mapping approach to allow a more expressive logic than trajectory formulas. The VossProver contains extensive integer rewriting routines, which are very important in verification proofs. Aagaard and Seger estimate that verifying the circuit took approximately twenty days of work. The computational cost of the verification was reasonable (a few hours on a DEC Alpha 3000). 7.3.3  IFIP WG10.5 Benchmark Example  Description of Circuit Benchmark 17 of the IFIP WG10.5 Benchmark Suite is a multiplier which takes two n-bit numbers and returns a 2n bit number representing their multiplication. This description is heavily dependent on the IFIP documentation.  1  Let A = a „ _ ! . . . a  iao  and B =  . . . Mo- Then A x B = Y^To (EJ=o Vmbj). 2i  Implementing this is straightforward: the basic operation is multiplying one bit of A with one bit of B and adding this to the partial sum. The component that accomplishes this basic operation takes four inputs: a One bit of the multiplicand, b One bit of the multiplier, c One bit of the partial sum previously computed, CIN  A one bit carry from the partial sum previously computed;  and computes a * b + c + CIN producing two outputs: S One bit partial sum, and 1  ftp://goethe.ira.uka.de/pub/benchmarks/Multiplier/  Chapter 7. 
COUT One bit carry.

The equations for the outputs are:

    S    = (a ∧ b) ⊕ (c ⊕ CIN)
    COUT = (a ∧ b ∧ c) ∨ (a ∧ b ∧ CIN) ∨ (c ∧ CIN)

The implementation of these equations (as given in the IFIP documentation) and the graphical symbol used to represent the component are presented in Figure 7.4.

Figure 7.4: Base Module for Multiplier

A vector of these components multiplies one bit of B with the whole of A and adds in any partial answer already computed. It might seem appropriate, rather than just having a vector of these components, to also have an adder which adds in carries from less significant columns to the results of more significant bits. The problem with doing that is that each stage would be limited by the need for possible carries from the least significant bit to be propagated to the most significant bit, with concomitant increases in the time and number of gates needed.

The approach used in the implementation is to produce two outputs: the first output is the sum of the bit-wise addition of the two inputs, ignoring the carries; and the second output is the carries of the bit-wise addition. Both of these outputs are forwarded to the next stage; here the carries are added in and new carries generated. We can consider the vector of S outputs as one n-bit number and the vector of COUT outputs as another n-bit number. If we consider stage k by itself, if the vector of a inputs is x, if the b inputs are all the bit y, and if the vector of c inputs is z, then we shall have that S + 2^(k+1)·COUT = xy + z. (This is something that must be proved in the verification.)

These components are arranged in a grid (Figure 7.5 shows how a 4-bit multiplier is arranged). The multiplier contains n stages, each of which multiplies one bit of B with A and adds it to the partial result computed so far. After k stages, n + k bits of the partial answer have been computed. The components making up each stage are arranged in columns in the figure. The components making up a row compute one bit of the final answer; carries from less significant bits are added in, and any generated carries are output for the more significant rows to take care of. In Figure 7.5, each of the base components is labelled with indices: i:j indicates that the component is the j-th component of the i-th stage.

Having passed through n stages, the full multiplication has been computed. However, as the final stage still outputs two numbers, the carries must now all be added in. Therefore the final step in the multiplier is a row of n − 1 full adders that adds in the carries. These full adders are labelled FA in Figure 7.5.

The implementation of the circuit was done in Voss's EXE format as a detailed gate-level description of the circuit. A unit-delay model was used, although this is essential neither to the implementation nor to the verification.

Figure 7.5: Schematic of Multiplier (inputs A(3..0) and B(3..0); output Out(7..0))

Verification

This section presents a detailed description of the verification of the four bit multiplier presented in Figure 7.5.
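Before walking through the proof, it may help to read the base module arithmetically. The sketch below (illustrative only; it is not the Voss EXE netlist, and the function name is invented) evaluates the two equations above and checks exhaustively that each cell computes a·b + c + CIN as a two-bit result, with COUT carrying weight two.

```python
# Illustrative check of the base module equations:
#   S    = (a AND b) XOR (c XOR CIN)
#   COUT = (a AND b AND c) OR (a AND b AND CIN) OR (c AND CIN)
# The pair (COUT, S) should be the two-bit value of a*b + c + CIN.
def base_module(a, b, c, cin):
    ab = a & b
    s = ab ^ (c ^ cin)
    cout = (ab & c) | (ab & cin) | (c & cin)
    return s, cout

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            for cin in (0, 1):
                s, cout = base_module(a, b, c, cin)
                assert 2 * cout + s == a * b + c + cin
```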
This example is small enough that the complete proof can be described, and this is useful to show how the inference rules are used. However, the example is big enough that there is some tedium involved too; it must be emphasised that in practice the verification is done using FL as the proof script language, which alleviates much of the tedium. It is also worth mentioning that the verification of a four bit multiplier is well within the capacity of trajectory evaluation. Although the proof is not independent of data path width since issues of timing are important, it may be useful to do the verification for a small bit width first using trajectory evaluation by itself. Identifying structure  Using the inference rules relies on using the properties of integers to  break the limitations of BDDs. Therefore, thefirststep in the proof is to identify some structure, in particular to identify which collections of nodes should be treated as integers. Notation: BM(i: j)(x) refers to node x in the basic module i: j; FA,(x) refers to node x of the full adder FAi. For each stage, we consider the collection of a inputs as an integer, the collection of b inputs as an integer, and so on . . . Similarly, the collection of S outputs and COUT outputs are both considered as integers. Table 7.4 presents the correspondences. The following bit vector variables are used: a  stands for the bit vector ( a , . . . , a ) ; 3  0  b stands for the bit vector (6 ,... ,b ) (a and b are the inputs to the circuit); 3  0  c stands for the bit vector (c ,... , c ); 7  0  d stands for the bit vector (d ,... , d ). 2  0  If A is a bit vector, then N(i) refers to the z-th least significant bit (so N(0) is the least 7  significant bit), and N(i..j) refers to the (sub)bit vector (N(i),...  , N(j)). We also use the  154  Chapter 7. Examples  Integer node A B 0 RSi RC{  Vector of bit nodes The four bit integer input The four bit integer input Output of the or gate S output of stage i for i = 0,... ,3 (BM(i: 3)(5),... ,BM(i: 0)(S),BM(i - 1 :0)(S),...,BM(0:0)(S)) COUT output of stage i for i = 0,... ,3 (BM(i: 3)(COUT),... , BM(i: 0)(COUT)  RS  4  The output Out {0,FA ,... ,FA ,5M(3: 0)(5),... ,5Af(0: 0)(S)) 2  0  Table 7.4: Benchmark 17: Correspondence Between Integer and Bit Nodes short hand that Rd = d is short for i?C;(2..0) = d (i?C; is four bits wide, d is three bits wide). Defining this correspondence has two advantages: the level of abstraction is raised since the verifier can think in terms of integers rather than bit vectors; and the verifier can use properties of integers to prove theorems without having to convert everything into BDDs. Anomalies in circuit implementation  There are a number of aspects of the circuit that can  be criticised and improved. The most obvious is that BM(i: 3)(COUT) = 0 for all i. In turn, this means that one of the inputs to the or gate is always 0, i.e. RS (7) depends entirely on 4  FA (COUT). The only advantage of this implementation is that it makes the circuit description 2  (slightly) more regular. The cost is the extra circuitry and time required to perform the computation. Furthermore, this implementation makes the proof more complicated. The final step in the proof below will be to show that since RS3 + 2 RC = ab that RS = ab; this is only true beA  3  4  cause the one input to the or gate is zero. Therefore, as the proof is constructed, we shall prove that BM(i,3)(COUT) = 0, complicating the proof slightly. A better implementation would have meant a simpler proof.  155  Chapter 7. 
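The stage-by-stage proof that follows establishes, in effect, a single arithmetic invariant: after stage k, RS_k + 2^(k+1)·RC_k = a·b(k..0), where b(k..0) denotes the low k+1 bits of b (this is the form of the stage results proved below). The sketch checks that invariant on a plain-integer model of the datapath; it is illustrative only — integers stand in for the bit-vector nodes of Table 7.4, the helper names are invented, and timing is ignored.

```python
# Illustrative model of the invariant established stage by stage below:
#   after stage k,  RS_k + 2**(k+1) * RC_k == a * b(k..0)
def stage(rs_prev, rc_prev, a, b_k, k, width=4):
    # rs_prev carries weight 1, rc_prev weight 2**k; the stage's base modules
    # sit at bit positions k .. k+width-1
    rs = rs_prev & ((1 << k) - 1)                 # low-order bits, already final
    rc = 0
    for j in range(width):
        ab  = ((a >> j) & 1) & b_k
        c   = (rs_prev >> (k + j)) & 1            # partial-sum bit from stage k-1
        cin = (rc_prev >> j) & 1                  # carry bit from stage k-1
        rs |= (ab ^ c ^ cin) << (k + j)           # S output of base module k:j
        rc |= ((ab & c) | (ab & cin) | (c & cin)) << j   # COUT output of k:j
    return rs, rc

for a in range(16):
    for b in range(16):
        rs, rc = 0, 0
        for k in range(4):
            rs, rc = stage(rs, rc, a, (b >> k) & 1, k)
            assert rs + (rc << (k + 1)) == a * (b & ((1 << (k + 1)) - 1))
        assert rs + (rc << 4) == a * b            # the final adder row folds rc in
```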
Examples  The Proof  Stage 0 The first step is to show the first stage performs the correct multiplication/addition.  (7.1) 2  1  [RCo] = ab{0) A [RCp] (3) = 0)}  To make STE as efficient as possible, we use as little information as possible by considering only one bit of b. However, at a later stage we shall want to use all the bits of b, so the next step is to include the rest of b in the result. There are a number ways of doing this. One would be to use the identity rule to show that B has any value imposed on it and then use conjunction with Result 7.1. However, in this case it is easier to use one of the rules of consequence (Theorem 5.18) and strengthen the antecedent.  |= By rule of consequence from Result 7.1 (7.2) 2 [RC } = ab(0) A [RC ](3) = 0)) 1  o  o  This use of the rule of consequence relies on Lemma 5.27, and is motivated by the fact that the antecedent of Result 7.2 uses more information than that of Result 7.1 Stage 1 Thefirststep is to show Stage 1 performs the correct multiplication/addition. Note, the proof is done for arbitrary input for RS and RC rather than the actual input. This is im0  0  portant because STE is used to do the proof; if the actual input (which is a function of A and B) were used, in general STE would not be able to cope.  156  Chapter 7. Examples  |= By STE <] Global [(3,100)] [A] = a A [B](l) = bi A [RS ] = c(3..0) A [RC ] = d A [i?Co](3) = 0 0  =^  (7.3)  0  Global [(6,100)] [fl,S ] + 2 [ RCi] = c(3..0) + 2 6/ + 2 a6(l) A [i?Ci](3) = 0} 2  1  1  1  J  In proving this result, STE is used; this implies that BDDs are used to represent data as this is necessary for STE. However, once the proof is done, the result is only stored symbolically, and the BDDs used to represent Result 7.3 are garbage collected. Having proved this, we now combine Results 7.2 and 7.3 using a combination of transitivity and specialisation. This is useful to do since we know something about the values of RS and 0  RC ; it is feasible to do since the consequent of Result 7.3 is strictly dependent on the nodes 0  RSo and RC — this means that Generalised Transitivity — Theorem 5.31 — can be used. 0  Informally, Theorem 5.31 says that c(3..0) + 2 d = ab{0). 1  \= By Generalised Transitivity 4 Global [(0,100)] ([A] = a A [B] = b) =>  Global [(6,100)] [RSi] + 2 [RC ] = ab(0) + 2 ab(l) A [RC }(3) = 0^ 2  1  l  1  Now we have the output of stage 1 solely in terms of a and b. This can be rewritten into a more elegant form. The proving system has integer rewriting procedures which automatically rewrites ab(n—1..0) + 2 ab(n) as ab(n..0). Thus applying Lemma 5.24 and the rule of consen  quence, Theorem 5.18, yields the next result:  Chapter 7. Examples  157  |= By rule of consequence (] Global [(0,100)] ([A]-a =>  A[B] = b)  Global [(6,100)] [RSi] + 2 [RC ] 2  1  Stages 2 and 3  = ab(1..0)  A [RCi](3)  = 0\  The steps are exactly the same as stage 1.  |= By STE ^ Global [(6,100)] [A] = a A [B(2)] = b A[RS ] 2  =>  1  = c(4..0) A[RC }  = d A[RC ](3)  1  = 0)  1  (7.6)  Global [(9,100)] [RS ] + 2 [RC } 3  2  2  = c{4..0} + 2 d + 2 ab(2) 2  2  A [RC ]{3) 2  = 0)  |= By Generalised Transitivity (Results 7.5 and 7.6) 4 Global [(0,100)] ([A] = a A [B] = b) =0>  (7.7)  Global [(9,100)] [RS ] + 2 [RC } 3  2  2  = ab(l..0)  + 2 ab(2) 2  A [RC }(3) 2  = 0\  \= By rule of consequence from Result 7.7 d Global [(0,100)] ([A] = a A[B] = b)  (7.8)  Global [(9,100)] [RS ) 2  + 2 [RC } 3  2  = ab(2..0)  A [RC ](3) 2  = 0)  158  Chapter 7. 
Examples  |= By STE 4 Global [(9,100)] [A] = a A [B] (3) = 6 A [RS ] = c(5..0) A [#C ] = 3  = 0  2  2  A [i?C ] (3) = 0 2  (7.9)  Global [(12,100)] [RS ] + 2 [RC ] = c(5..0) + 2 d + 2 a6(3) A[flC ](3) = 0^ 4  3  3  3  3  3  |= By Generalised Transitivity (Results 7.8 and 7.9) ^ Global [(0,100)] ([A] = aA[B] = b) =£>  (7.10)  Global [(12,100)] [RS ] + 2 [RC ] = ab(2..0) + 2 ab(3) A [RC ](3) = 0ty 4  3  3  3  3  \= By rule of consequence from Result 7.10 { Global [(0,100)] ([A] = a A [B] = b) =>  (7.11)  Global [(12,100)] [RS ] + 2 [RC ] = ab A [RC ](3) = 0 > [ 4  3  3  3  The adder stage The final step in the proof is to ensure that the last, adder stage, adds in the carries correctly. Here possible carries in the least significant bit must be passed to the most significant bit. For large bit widths, this adder stage may take tens or hundreds of nanoseconds, so timing may be important here.  |= By STE ^ Global [(12,100)] ([#$,] = c(6..0) A [RC ] = d A [RC ](3) = 0) 3  =f>  3  Global [(22,100)] ([RS ] = (c(6..0) + 2 d)(7..0))) 4  4  Now, using general transitivity, we have:  (7.12)  159  Chapter 7. Examples  Bit width Number of gates  4 8 16 32 64  D Time (s) T Time (s)  135 473 1841 7265 28865  3.9 9.8 36.0 168.7 1081.9  5.4 15.0 60.8 371.4 > 6000  Table 7.5: Verification Times for Benchmark 17 Multiplier  |= By Generalised Transitivity (Results 7.11 and 7.12) <] Global [(0,100)] {[A] = a A[B] = b) =>  (7.13)  Global [(22,100)] ([RS ] = a * b) ) 4  Again, the automatic rewrite systems recognises that ab is an eight bit number, and so rewrites a * 6(7..0) as a * 6. This concludes the proof. Appendix C has the FL proof script for the multiplier example. Experimental results and comments  This IFIP WG10.5 Benchmark 17 multiplier was veri-  fied for a number of bit widths (the n bit width case multiplies two n-bit numbers and produces a 2n bit number). The time taken to perform the verification on a DEC Alpha 3000 is shown in Table 7.5: the column labelled 'D Time' shows the time taken using the direct method, and the column labelled 'T Time' shows the time taken using the testing machine approach (all times shown in seconds). These results are useful for evaluating the testing machine approach, and are used in the discussion on testing machines in Section 7.4.4. The proof script itself is short (less than 200 lines, about 50 of which are declarations) and straightforward to write, relying only on simple properties of integers. The full script can be found in Section C.4. Once structure in the circuit is identified by associating integers with collections of bit valued nodes, the verification no longer has to deal with bits, and at no stage  Chapter 7. Examples  160  does the verification have to concern itself with how the full adders or the base components are actually implemented. The reason why STE cannot deal with the verification by itself is not because of the size of the circuit; the problem is that there is no good variable ordering for the multiplication of two bit vectors. However, good variable orderings are definitely possible for verifying the individual components of the multiplier with STE, and good heuristics to find good ordering can easily be automated. 7.3.4 Other Multiplier Verification One of the main examples used in this thesis is the verification of a multiplier circuit. To put the thesis work in context, other work on multipliers is surveyed. 
Multipliers represent an important class of circuit, because arithmetic circuits are in themselves important, and because they are particularly challenging for BDD-based approaches. Simonis uses a simple proof checker to verify a multiplier in [118]. The circuit description is represented in a Prolog-like language, and the correctness proof simulates a hand proof: nine correctness conditions are identified and checked (although it is not proved that these nine conditions imply that the multiplier works correctly). Each of the conditions is checked by a Prolog routine. Although the computational costs of verification were low, the correctness of the proof relies on the correctness of the nine conditions and the correctness of the Prolog routines. Timing is not checked. Pierre presents the verification of the WG 10.5 multiplier in [108,109]. The proof is done in the Boyer-Moore prover Nqthm. The work presented is not completely automated in that manual work is needed to translate the behavioural description from VHDL into thefirst-orderlogic used by Nqthm. The proof itself is based on a methodology supporting induction developed by Pierre for the verification of replicated structures. Provided certain design criteria are met, the  Chapter 7. Examples  161  proof can be automatically done by the system. Using replication and induction a general proof can be done for anra-bitmultiplier rather than having to do individual proofs for individual bitwidths. Moreover, the approach is computationally efficient so duplicated work can be avoided. The disadvantage of this approach is that it relies on the VHDL programs being written in a certain way. This is probably not too critical since the restrictions are not unreasonable. More seriously, timing issues are not dealt with. This may be a problem since while the functionality is independent of the bit width, timing is not. As timing is an important part of low-level verification, this approach needs further development. Equivalence methods have also been used to verify multipliers. Van Eijk and Janssen use a BDD-based tool to show equivalence between different implementations of multipliers [30]. Their method relies on (automatically)findingstructural and functional equivalences between different implementations of the circuit. For some circuits they get excellent experimental results. However, they too do not consider timing. Typically, one of the circuits is derived from the other through a number of design steps; thus, the confidence in the verification depends on the confidence on the correctness of the original circuit. Although the compositional method proposed in this thesis relies on some structure of the circuit being identified, it is not necessary to decompose the circuit, or that clearly defined gross structure be determined. To be useful, it is only important to be able to identify circuit nodes with 'interesting values'; this makes it relatively robust to circuit optimisation. An advantage of the compositional theory is that it incorporates a good model of time, which may be important in many applications. This advantage outweighs the disadvantage of having to verify circuit designs for each bit-width, which theorem proving approaches may obviate. As discussed in Section 2.3.3, Kurshan and Lamport also explored combining theorem proving and model checking, and have applied their technique to verifying multipliers [93]. The work was not fully mechanised, and the implementation of the multiplier given at a high level.  Chapter 7. 
Examples  162  However, although exploratory, their work suggested that combining different approaches would be successful.  7.4  Matrix Multiplier  A filter circuit based on a design of Mead and Conway is Benchmark 22 of the IFIP WG10.5 suite [100]. The filter is a matrix multiplication circuit for band matrices. A band matrix of band width w is a matrix in which zeros must be in certain positions (the matrices contain natural numbers), and the maximum number of non-zero items in a row or column is w. This circuit is called 2Syst. Section 7.4.1 discusses the specification of the circuit; Section 7.4.2 discusses its implementation; Section 7.4.3 presents its verification; and Section 7.4.4 analyses and comments on the verification in which a significant timing error was discovered. Sections 7.4.1 and 7.4.2 rely heavily on the benchmark documentation.  2  7.4.1  Specification  The suite documentation does not give a general specification of the circuit (it does give a general implementation), but presents the case of w = 4. A circuit implemented for a band-width of w can be used to multiply matrices of any size — larger matrices just take longer to multiply; the documentation does not consider the general case, and gives only a specification for 4 x 4 matrices. Let A and B be the two 4 x 4 matrices given below: The URL for the documentation is f t p : //goethe. i r a .uka. de/pub/benchmarks/2Syst/. This section is based on the documentation of this benchmark dated 16 November 1994. As a result of this research, the documentation has been revised, and the new version will be released shortly. 2  163  Chapter 7. Examples  «11  «12  0  0  «21  «22  «23  0  ^31  Ct32  «33  «34  «42  «43  a  0  hi  bu  &13  0  bi  b2 2  ^23  &24  0  b2  ^33  ^34  0  0  C>3  6  2  B =  4 4  3  4  44  and let C — A x B be the matrix:  C  Cn  Cl2  C  i  3  C21  C  C  2  3  C31  C32  C 3  C3  C i  C  C  C  4  2  4  2  2  3  4  3  C14 C  2  4  4  4  4  The external interface of the 2Syst circuit is shown in Figure 7.6. The coefficients of matrix A are input on the inputs aO, . . . , a3, the coefficients of B are input on bO, . . . , b3, and the coefficients of C, the result, is output on outputs cO to c6. (What this picture, taken from the documentation, does not show is that the circuit is clocked and there should be a pin for clock input too.)  Figure 7.6: Black Box View of 2Syst  Chapter 7. Examples  164  Timing  The timing of when and where the inputs must be applied and the outputs become available is critical. The timing for the inputs is presented in Table 7.6. In clock cycles 0 to 3, all the inputs are initialised by having zero applied to them. Then, for the next ten cycles the matrix coefficients are input to the circuit. For example, in cycle 9, the coefficients a , 042, b and 2 3  32  642 are input on pins aO, a3, bO, and b3 respectively, while all other pins have zero applied to them. clock  aO  al  a2  a3  0  4 5  0 0 0  0 0  0 0 0  0 0 0  6  «12  «31  621  7  0 0  0 0  0 0  «42  632  0 0 0 0  0 0  0-3  8 9 10 11 12 13  a 3 2  0 0 0.34  0  flu 0 0 «22  0 0 «33  0 0 044  0 0 0-32  0 0 «43  0 0  bO b l 0 611  0 0 622  0 0 633  643  0 0  0  644  b2  b3  0 0  0 0 0  612  0 0  613  0 0  623  0 0  b4 2  0 0 0 0  634  0 0  Table 7.6: Inputs for the 2Syst Circuit Table 7.7 shows when and where the coefficients of the output matrix can be found. The specification gives some freedom in timing here. It requires that the output be given in clock cycles t ,. . . 
, h, but does not specify values for the tj \ and, while t < ti... 0  0  < t , the tj need 6  not be consecutive clock cycles. This gives some latitude in the implementation of the circuit. 7.4.2  Implementation  The matrix multiplication C = AxB can be defined in different ways. Assuming for simplicity that A and B are both r x r matrices, the usual definition of C is through defining each C{ - = 3  Chapter 7. Examples  165  cycle  cO  c2  Cl  to  c3  c4  c5  c6  Cll  U  c i  Cl2  h  Cl3 C  2  C31  C22  u  C33  h  c i  C32  C23  i 4  C34  4  C42 C43  C44  te  Table 7.7: Outputs of the 2Syst Circuit Zll=i ikbki- An alternative definition is useful in implementing parallel hardware to perform a  the multiplication: matrix multiplication can also be defined by the recursive equation 7.14. c  {1)  =  _  0 (H-i)  The entries in arrays A and B are n-bit numbers. If the band-width of the matrices is w, the maximum number of non-zero terms in any c,j is w, which means that each entry in c,j is of bit-width m = 2n + r — 1. The basic operation of Equation 7.14 is performing an addition and a multiplication; this is modelled in the implementation, where the basic cell has an integer multiplier and adder to perform this. The external interface of these cells is shown in Figure 7.7. The cell has three inputs: C_In is an m bit number, containing a partial sum; and A_In and B_In are n bit data which are either zero or coefficients of the A and B matrices. A_Out, B_Out are two n-bit output values and C_Out is an m-bit output. If in one clock cycle A_In, B_In and C_In have the values a, b and c respectively, then at the start of the next cycle: A_0ut = a, B_0ut = b, C_0ut = ab + c. Thus, the cell has two purposes: it acts as a one clock-cycle delay buffer for coefficients of the matrices (which are passed on to neighbouring cells), and performs the basic operation of  166  Chapter 7. Examples  C_Out(m..l -  A_In(n..l -  0)  0)  B_In(n..l -  0)  \ B_Out(n..l -  0)  A_Out(n..l -  C_In(m..l -  0)  0)  Figure 7.7: Cell Representation  an addition and multiplication. Figure 7.8 shows how the cells are implemented. Each cell contains a multiplier, an adder, and three registers. The multiplier is the one discussed and verified in the previous section, and the adder is a conventional 2n-bit adder. Each register has an input, an output, and a clock and select pin. By connecting the select and clock pins to the same global clock, the registers become positive-edge triggered: when the clock rises the value at the register's input is latched, output, and maintained until the clock rises again. These cells are connected in a systolic array: each clock cycle cells performs an addition and multiplication and then passes its results to its neighbours for use in the next cycle. The cells are arranged as presented in Figure 7.9, and the timings given in Table 7.6 are designed so that cells get the right inputs at the right time. A simple example will illustrate how this works. To help the description, each cell in the systolic array has been labelled by i: j. The circuit is implemented in Voss's EXE format as a detailed gate level description, using a unit delay model. The implementation is based on the VHDL program given in the benchmark suite documentation.  167  Chapter 7. Examples  n  A_In-  A_Out  Reg B_In-  n B_Out  Mult  Reg Adderj-  -hm  C_In-  m  .  C_Out  clock  Figure 7.8: Implementation of Cell  Example 7.1.  
Consider the computation of c i = a i&n + a b2i- In the first three clock cycles the circuit is 2  2  22  initialised so that at the start of the fourth cycle, all inputs have value zero. Cycle 4: &n is input on b l (input B_In of Cell 1:0). ( a is also input in this cycle, but in the u  example, we only consider values contributing to c i). 2  Cycle 5: Cell 1:0 will have passed b to its neighbour, so that 6n now becomes an input for Cell n  1:1. a is input on a.2 (the A_In input of Cell 0:2). 21  Cycle 6: Cell 1:1 will have passed b to the B_In input of Cell 1:2, and Cell 0:2 will have passed n  an to the A_In input of Cell 1:2. At this stage, the C_In input of Cell 1:2 has the value 0. Cell 1:2 therefore computes a b . At the same time, b \ appears as input on bO, which n  is input B_In of Cell 0:0.  n  2  168  Chapter 7. Examples  Cycle 7: Cell 1:2 will have passed a b n  n  to Cell 0:1 as its C_In input. Cell 0:0 will have passed  on £>2i to Cell 0:1 as its B_In input.  a2 2  appears on a l , which is the A_In input of Cell  0:1. Cell 0:1 computes anbn + 022^21 • Cycle 8: Cell 0:1 outputs a bn + a ib \ on its C-Out port (which is c4). n  2  2  c3 aQ ,  1  0:0  , JaOc2 J —  1:0  .bi  ql  2: 0  1:1  Jo2 cO  1  2 :1 2:2  3: 1  3:3  Figure 7.9: Systolic Array  7.4.3  Verification  The verification task can be divided into two parts, the verification of the individual components, and using the verification of the components to show that whole array is correct.  169  Chapter 7. Examples  Verifying the Cells  The verification of a cell must show the multiplier, adder and registers all work correctly. Each cell must be verified individually. This section describes the verification of Cell u:v, and assumes for the sake of this exposition that the clock cycle is 200ns, and the bit-width is 4. In the discussion below, the A_In„„ and B_In „ are four-bit nodes, while all variables are 12 u  bit values. To simplify notation, in all the discussion below, a and b are short hands for a(3..0) and6(3..0) respectively. It turns out that it useful to divide this proof into three parts: • Given value a on A_In „, b on B_In , and c on C_In , one clock cycle later a * b + c u  w  ul)  appears on C_0ut „; u  • Given value a on A_In „, one cycle later a appears on A_0ut ; and ulI  u  • Given value b on B_In „, one cycle later b appears on B_0ut „. u  u  When the cells are connected together, port CJln is connected to C_0ut( )(„ ), port A_0ut „ u+1  uv  +1  u  is connected to A_In („+i), and B_0ut , is connected to B_In( i)„. Therefore, the above veriut  u  u+  fication conditions are rewritten as: • Given value a on A_In„„, b on B_In„„, and  c  on C.Out^+ij^+i), one clock cycle later  a * 6 + c appears on C_0ut „; u  • Given value a on A_In „, one cycle later a appears on A_In (^ ); and u  u  +1  • Given value b on B_In„^, one cycle later b appears on B_In(„ i)„. +  Of course, it is possible to combine all three into one, stronger result. However, having three weaker results makes the proof more flexible since at some stages the proof needs only the weaker result, and using a stronger result would clutter things up and be more inefficient. The costliest part of the proof is to show the multiplier works correctly. As Section 7.3.3 showed how the Benchmark 17 multiplier can be verified, for the purpose of this section, Result 7.15 is assumed (in the actual verification, the multiplier for each cell is reverified).  Chapter 7. 
Examples  170  |= By various rules d Global [(0,100)] ([k-In ] = a A [B.In ] = b) uv  (7.15)  uv  Global [(22,100)] ([O ] = a*b)) uv  In the cell, the clock has an important effect; to include information of when clocking happens, the rule of consequence is often used to strengthen the antecedent of a result. For convenience, let Clock  k  = Global [(200&, 200Jc + 99), (200(£: + 1), 200(£: + 1) + 99)] ([clock] =f) A Global [(200A; + 100,200/c + 199)] ([clock] = t)  which is the information about clocking which is needed in the proof of the A;-th cycle. This formula says that the clock is low from time 200fc to time 200/?+99, then high from time 200A;+ 100 to 200A; + 199, and then low again from time 200& + 200 to 200A; + 299. Using this idea, Result 7.15 is transformed strengthening the antecedent, as well as taking into account the input on C_In. Although, this is not useful for its own sake, it is useful in using the essence of Result 7.15.  |= By Theorem 5.7 d Global [(0,100)] [A_In „] = a A [B_In„„] = 6 A [C_0ut „ tt  =>  (  +1)(  „ ] = c A Clocks +1)  (7.16)  Global [(22,100)] ([•„„] = a * 6) }  In the next step we show that the adder works correctly and that the output of the adder is latched for the appropriate time. This can be done with one trajectory evaluation. Note that the time  Chapter 7. Examples  171  interval in the consequent could be made bigger, but the one given suffices. |= By STE \ Global [(22,100)] ([0 „] = d(7..0> A [ C _ 0 u t U  = 0  (u+1  ) „ i)] = c A (  +  Clock )  (7-17)  0  Global [(200, 300)] ([C_0ut „] = c + d(7..0)) ) u  Results 7.16 and 7.17 are now combined by specialising the latter result (substituting ab for d), and using transitivity. Note that this is just a special case of General Transitivity (Theorem 5.31).  |= By Theorem 5.31 <] Global [(0,100)] ([A_In „] = u  =>  a A [B_In ] = 6 A [ C _ 0 u t uv  ( u + 1 ) (  „  + 1 )  ] =c A  Clock ) 0  (7.18)  Global [(200,300)] ([C_0ut „] = c + a * b) ) u  Result 7.18 is the core result that has to be proved about the cell. The next two results show that the cell also acts as one cycle delay buffers for values of the A and B matrices. Both of these results can easily be done using STE alone.  |= By STE \ Global [(0,100)] ( [ A _ I n „ ] = a A u  = 0  Clock ) 0  Global [(200,300)] ([A_In  u(t;+1)  (7.19)  ] = a) )  \= By STE \ Global [(0,100)] ([B_In ] = b A w  =>  Clock )  Global [(200, 300)] ( [ B _ I n  0  (u+1)  (7-20)  „ ] = b) \  Overall Verification Once each of the cells has been individually verified, the proofs about the individual cells must be combined to prove that the systolic array as a whole works correctly. The proof is modelled on how the systolic array computes its results; in its development the  Chapter 7. Examples  172  proof traces the behaviour of the circuit as it uses its inputs, computes results, and outputs the answers. Consider the operation of one cell, Cell u:v. 
It has three input neighbours from which it gets values (the boundary cells are special cases and easily taken care of): • Cell u:(y — 1), its A-left-neighbour from which it gets a value of the A matrix, • Cell (u — l):v, its B-right-neighbour from which it gets a value of the B matrix, and • Cell (u + l):(v + 1) its C-down-neighbour from which it gets a partial sum; and three output neighbours to which it gives values: • Cell u:(v + 1), its A-right-neighbour, to which it gives a value of the A matrix, • Cell (u + l):v, its B-left-neighbour, to which it gives a value of the B matrix, and • Cell (u — l):(v — 1) its C-up-neighbour, to which it gives a partial sum; At the beginning of clock cycle k, none, some, or all of the following will be known about Cell u:v's input neighbours (recall that a clock cycle is 200 time units long), where the Ij are antecedent TL formulas, and the 6 are integer expressions: X  |= 4/ =t>Global [(200fc,200fc + 100)] [kJ.n ] = 6 )  (7.21)  |= \ I ^ G l o b a l [(200A;,200fc + 100)] [B_In^] = ^ ^  (7.22)  uv  1  a  2  6  |= \ I = t > G l o b a l [(200fc, 200A; + 100)] [ C _ 0 u t „ x  (  +1)(  „ ]= 6 \ +1)  C  (7.23)  If all three results are known, then we use conjunction on Results 7.21-7.23, and introduce new clocking information. For convenience, let I4 = Ii A I A I A Clockk2  3  This is the conjunction of I I and I and contains necessary clocking information for the k-ih u  2  3  173  Chapter 7. Examples  cycle. Then we have: (= By Conjunction and Rule of Consequence (7.24) Global [(200A;, 200A; + 100)] [A_In „] = 6> A [B_In^] = 6> A [C_0ut„] = 6 . ) a  u  6  C  u  Then Result 7.18 is time-shifted forward by &-clock cycles to get: |= By time-shifting d Global [(200Jfe, 200A; + 100)] ([kJ.n ] = a A [B_In „] = 6 A [C_0ut( )(„ i)] = c A Clock ) uv  =T>  u  u+1  +  k  Global [(200(fc + l),200(fc + 1) + 100)] ([CJDut ] = c+ a * b) |> uv  Using General Transitivity on Results 7.24 and 7.25 leads to: |= By Theorem 5.31 ih =D>  (- ) 7  26  Global [(200(A: + 1), 200(fc + 1) + 100)] ([C_0ut „] = 9 + 6 * d )) u  C  a  b  This is a proof of what Cell u:v computes in the A;-th cycle. In proving what happens in the (k + l)-th cycle, Result 7.26 is used in the proof of the behaviour of Cell (u + l ) : ( u +1), which is Cell u:u's up-C-neighbour. Similarly, if Result 7.21 is known, then precondition strengthening is used to introduce new clocking information to get:  |= By Theorem 5.7 <| h A Clock  (7.27)  k  =>  Global [(200fc, 200& + 100)] [A_In „] = 6 ) u  a  Then Result 7.19 is time-shifted by k clock cycles to get:  174  Chapter 7. Examples  (= By STE (7.28)  <] Global [(200fc, 200k + 100)] ([A_In„„] = a) A C/ocfc  fc  Global [(200(fc + l),200(fc + 1) + 100)] ( [ A _ I n „ ] = a) ) u(  +1)  General Transitivity between Results 7.27 and 7.28 then yields: (= By Theorem 5.7  (7.29)  | / i A Clockk Global [(200(fc + l),200(fc + 1) + 100)] ([A_In , ] = 9 ) ) u(t  +1)  a  This shows what Cell u:v passes to its A-right neighbour at the end of the A;-th cycle, and this result will be used to prove properties of Cell u: (v + 1) in the (k + l)-th cycle. A similar result shows that in thefc-thCell u:v also passes on the value input on its B_In port, (= By various rules  (7.30)  ^ I A Clockk 2  Global [(200(fc + 1), 200(A; + 1) + 100] ( [ B _ I n  F L Proof script  (u+lH  ] = d) ) h  The F L proof script that performs the proof uses the approach outlined above.  First, the behaviour of each cell is individually verified. 
Then, the proof proceeds by proving properties of the circuit in each clock cycle. A two dimensional array of proofs is kept: at the start of the k-ih cycle, the array's (it, v) entry contains proofs of what the output of Cell u:v input neighbour's are at the end of the (k — l)-th cycle. The proof then uses this information to infer as much as possible about the output of Cell u:v at the end of thefc-thcycle, and this information is then used to update the array of proofs so that Cell u:v's output neighbours can use this information in the (k + l)-th cycle.  Chapter 7. Examples  Start ol cycle  c_0  Cell 3:0  7 8 9 10 11 12 13 14 15 16  175  C_l  Cell 2:0  c_2 Cell 1:0  c_3 Cell 0:0  c_4 Cell 0:1  c_5 Cell 0:2  c_4 Cell 0:3  Cll c i  Cl2  2  C31  Cl3 C14  c i  C22  C23  4  C32  C24  C42 C33 C43  C34  C44  Table 7.8: Benchmark 22: Actual Output Times 7.4.4  Analysis and Comments  The FL proof script uses STE and the inference rules to prove what the output of the circuit is at different stages - this is summarised in Table 7.8. Comparison between Tables 7.7 and 7.8 shows that even given the ability for the designer to choose the values of t\,... ,t , the implementation does not meet the specification. e  There are two possibilities. The easier and probably better solution would be to change the specification, in accordance with the results shown in Table 7.8. However, another solution would be to place one cycle delay buffers on the outputs c_0, c _ l , c_5 and c_6; the amount of extra circuitry is small, would not slow down the circuit, and would lead to a more elegant specification. The proof script, including the proof of the correctness of all the multipliers and declarations, is approximately 500 lines long, of which about 100 lines are declarations. The proof script can be found in Section C.5. The program itself is straightforward, although the use of a two dimensional array does not show off a functional, interpreted language at its best. The complete verification of a 4 x 4 systolic array of 32 bit multipliers (roughly 110 000 gates) takes just over 10 hours of CPU time on a DEC Alpha 3000 using the testing machine approach, and  Chapter 7. Examples  176  just under three hours using the direct method. This verification uses the testing machine algorithm for STE, showing the weakness of using testing machines. The data structure needed to represent the model of the circuit is approximately 4M in size, making composition of circuit and testing machines difficult. While other implementations of machine composition are possible, the sheer size of the circuits remains an inherent problem. A similar problem can be seen in the verification of the multiplier (Table 7.5). Since both the size of the circuitry and the number of trajectory evaluations is quadratic in the bit-width, if every time trajectory evaluation must be done, circuit composition must be too, the resulting algorithm will be at least quartic. This explains why the verification of large bit widths becomes so expensive for testing machines. The second part of the verification — showing that when connected together the multipliers produce the correct answer — is essentially performing symbolic simulation. Zhu and Seger have shown that given a set of trajectory assertion results, there is a weakest machine which satisfies these assertions [130]; this weakest machine is a conservative approximation of the circuit as any assertion that is true of the approximation is also true of the circuit. 
This suggests an alternative verification methodology. The verification of the correctness of each of the multipliers extracts the essence of the behaviour of the circuit. From these assertions it should be possible to automatically generate a conservative approximation of the entire systolic array. The representation of this approximation would not use BDDs; in fact it would be at a higher level of abstraction. STE could then be used on the verification of the entire systolic array.

7.5 Single Pulser

This example shows how the fundamental compositional theory introduced in Chapter 5 can be built on; in particular, through the use of induction on time, composite, problem-specific inference rules can be developed.

7.5.1 The Problem

Johnson has used the Single Pulser — a textbook example circuit — to study different verification methods [88]. The original problem statement for the circuit is:

    We have a debounced pushbutton, on (true) in the down position, off (false) in the up position. Devise a circuit to sense the depression of the button and assert an output signal for one clock pulse. The system should not allow additional assertions of the output until after the operator has released the button.

Johnson reformulates this into:

• the pulser emits a single unit-time pulse on its output for each pulse received on i,
• there is exactly one output pulse for every input pulse, and
• the output pulse is in the neighbourhood of the input.

Figure 7.10 illustrates the external interface of the pulser. The port In is the button to be pressed (if it has the value H, the button is pressed; if L, then it is not), and Out is the output.

[Figure 7.10: Single Pulser. External interface: input port In, output port Out.]

Johnson presents the verification of this circuit in a number of different systems. This section attempts the verification using the compositional theory of STE. This attempt is not as general as some of Johnson's approaches, since the specification is very specific about the timing of the output in relation to the button being pressed.

7.5.2 An Example Composite Compositional Rule

The motivation for the lemma below is that the essence of the behaviour of the pulser can be described by three assertions that show how the pulser reacts immediately to stimulation. By using induction over time, these results can be combined and generalised.

Lemma 7.1. Let s, t, and u be arbitrary integers such that 0 < s < t < u. Suppose:

1. |= ⟨ ¬g₁ ⟹ Next h₁ ⟩,
2. |= ⟨ (¬g₁ ∧ Next g₁) ⟹ (Next² h₂) ⟩, and
3. |= ⟨ g₁ ⟹ Next² h₁ ⟩;

then

1. |= ⟨ Global [(s, t)] (¬g₁) ⟹ Global [(s + 1, t + 1)] h₁ ⟩.
2. |= ⟨ (Global [(s, t)] (¬g₁) ∧ Global [(t + 1, u)] g₁)
       ⟹ (Global [(s + 1, t + 1)] h₁) ∧ (Next^(t+2) h₂) ∧ (Global [(t + 3, u + 2)] h₁) ⟩

Proof. The proof of 1 comes straight from Corollary 5.23. For 2, let s, t, and u be arbitrary natural numbers such that s < t < u.
(1) |= ⟨ Global [(s, t)] (¬g₁) ⟹ Global [(s + 1, t + 1)] h₁ ⟩
        From hypothesis (1), by Lemma 5.22

(2) |= ⟨ Next^t (¬g₁) ∧ Next^(t+1) g₁ ⟹ Next^(t+2) h₂ ⟩
        Time-shifting hypothesis (2)

(3) |= ⟨ Global [(t + 1, u)] g₁ ⟹ Global [(t + 3, u + 2)] h₁ ⟩
        From hypothesis (3), by Lemma 5.22

(4) |= ⟨ (Global [(s, t)] (¬g₁) ∧ Global [(t + 1, u)] g₁)
        ⟹ (Global [(s + 1, t + 1)] h₁) ∧ (Next^(t+2) h₂) ∧ (Global [(t + 3, u + 2)] h₁) ⟩
        Conjunction of (1), (2), (3)

7.5.3 Application to Single Pulser

Given a candidate circuit, it should be possible to use STE to verify the following three properties:

1. |= ⟨ (¬[In]) ⟹ Next (¬[Out]) ⟩;
2. |= ⟨ (¬[In] ∧ Next [In]) ⟹ (Next² [Out]) ⟩, and
3. |= ⟨ [In] ⟹ Next² (¬[Out]) ⟩.

Using these results, the above lemma can be invoked to show that

1. |= ⟨ Global [(s, t)] (¬[In]) ⟹ Global [(s + 1, t + 1)] (¬[Out]) ⟩, and
2. |= ⟨ (Global [(s, t)] (¬[In]) ∧ Global [(t + 1, u)] [In])
       ⟹ (Global [(s + 1, t + 1)] (¬[Out])) ∧ (Next^(t+2) [Out]) ∧ (Global [(t + 3, u + 2)] (¬[Out])) ⟩

The first result says that if the input does not go high (the button is not pushed), then the output does not go high. The second result says that when the button is pushed (the input goes from low to high), the output goes high for exactly one pulse and then goes low and stays low at least as long as the button is still pushed.

I argue that these two properties capture the intuitive specification of Johnson. However, the specification is more restrictive; there are valid implementations that satisfy Johnson's specification which would not pass this specification, showing the limitations of our current methods. It is possible to give a more general specification based on Johnson's SMV specification³, but currently there are no efficient model checking algorithms for these specifications.

³ Note that although the timing constraints in the SMV specification are more general, this SMV specification is also implementation dependent — in particular, it requires some knowledge of the internal structure of the implementation, which this proof does not.

7.6 Evaluation

The experiments reported in this chapter showed that the compositional theory can be successfully implemented in a combined theorem prover-trajectory evaluation system, thereby enabling circuits with extremely large state spaces to be fully verified with reasonable human and computational costs. The following table summarises the examples verified (in the size column, n refers to the bit-width).

    Description of circuit            How verified                  Approx. size (gates)
    Simple comparator                 STE/Compositional Theory      O(n²)
    Hidden weighted bit               STE/Compositional Theory      O(n²)
    Carry-save adder                  STE                           200
    B8ZS encoder                      STE                           75
    IEEE floating point multiplier    STE/Compositional Theory      33 000
    Simple 64-bit multiplier          STE/Compositional Theory      25 000
    Benchmark 17 multiplier           STE/Compositional Theory      28 000
    Benchmark 22 systolic array       STE/Compositional Theory      115 000

In using the verification system, a key issue is the user interface to the system. Both STE and the other inference rules are provided in one common, integrated framework. This not only makes it easier for the human verifier to use, but reduces the chance of error.
Providing STE as an inference rule for the theorem prover to use proved useful. The ability to use FL as a script language was extremely important for increasing flexibility and ease of use. The method of data representation proved to be very successful. It allowed BDDs to be used where appropriate, and other representations where BDDs are inappropriate. Decision procedures and other domain knowledge are critical for the success of the approach.

The results presented show that the increased expressiveness of TL not only allows a richer set of properties to be expressed, but can make specification cleaner too.

This chapter also shows that all three extensions to STE are feasible and can be applied successfully. However, both the testing machines and the mapping method have significant drawbacks in different circumstances. Testing machines are not appropriate to use when the circuit being verified is very large, and when a number of trajectory evaluations will be run requiring different testing machines. Although the cost of automatically constructing testing machines is reasonable, the overhead of performing circuit composition repeatedly can be very large. On the other hand, once the new circuit is constructed, trajectory evaluation is efficient, and therefore the method may be appropriate where only a few trajectory evaluations will be done, and where the consequents are complicated.

The mapping method suffers from the need to introduce extra boolean variables. This is particularly the case when wishing to show that a state predicate holds for a sequence of states, where although the individual states are different, the relationship between state components stays constant. For example, we may wish to show that for a sequence of n states, at any time exactly one of m of the state's components has an H value. Using the mapping method would require the introduction of nm variables. A different example is the B8ZS verification, where we wish to show that too many zeros do not appear consecutively. The testing machine and direct methods require no new boolean variables; the mapping method would require two new boolean variables for each time step.

Chapter 8

Conclusion

Verification of large circuits is feasible using the appropriate logical framework. Chapters 3, 4 and 5 presented such a framework. Chapters 6 and 7 showed how this theory can be successfully implemented and illustrated the method by verifying a number of circuits. A summary of the research findings is given in Section 8.1, and some issues for future research are given in Section 8.2.

8.1 Summary of Research Findings

8.1.1 Lattice-based Models and the Quaternary logic Q

The motivation of model checking is to use a logic to describe properties of the model of the system under study, and to verify the behaviour of the model by checking whether the properties (written as logic formulas) are satisfied by the model. The key questions are: how the model is represented; which logic is used; and how satisfaction is checked.

Using a lattice model structure has significant advantages for automatic model checking. By using a partial order to represent an information ordering, much larger state spaces can be modelled directly than with more traditional representation schemes. Previous work described earlier showed the advantage of this method of model representation. This information ordering has a direct effect on what can be known about the model.
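To see why the information ordering matters, consider the familiar ternary abstraction of circuit values. The sketch below is a standard presentation in Python, not the thesis's own definitions (Table 3.1 and the state lattice differ in detail): X carries no information and sits below both 0 and 1, so one partially defined simulation state stands for the whole set of boolean states obtained by filling in the X's.

# Sketch of the information ordering on circuit values (a standard
# presentation; the thesis's own tables may differ in detail).

X, LO, HI = 'X', 0, 1

def leq(a, b):                       # a is no more defined than b
    return a == X or a == b

def and3(a, b):                      # ternary AND, monotone w.r.t. leq
    if a == LO or b == LO:
        return LO
    if a == HI and b == HI:
        return HI
    return X

# One partially defined evaluation covers many boolean ones:
assert and3(LO, X) == LO             # output known even though b is unknown
for b in (LO, HI):                   # and consistent with every filling-in of X
    assert leq(and3(LO, X), and3(LO, b))

The price of this compactness is that a plain true/false verdict no longer suffices: a property may be neither definitely true nor definitely false of a partially defined state, and that is the gap the quaternary logic Q is designed to fill.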
A two-valued propositional logic is too crude a tool to use — it must conflate lack of knowledge with falseness. This is not only wrong in principle; the technical properties of a two-valued logic make it impossible to support negation fully. The quaternary logic Q is suitable for describing the state of lattice-based models since it can describe systems with incomplete or inconsistent information. This makes it possible to distinguish clearly between truth and inconsistency, and falseness and incomplete information. Moreover, it supports a much richer temporal logic.

On the whole, the use of Q has been very successful. However, there are some minor points which need some attention. As discussed in Chapter 3, the definition of Q given in Table 3.1 on page 49 is not the only one possible. For example, in the definition given here, ⊥ ∨ ⊤ = t. This definition is not without its problems — although it does have the advantage of very efficient implementation, it complicates some of the proofs and, notwithstanding the usual intuitive motivation, seems difficult to justify in the context of a temporal logic. ⊥ ∨ ⊤ = ⊤ would seem to be a better definition. In order to keep monotonicity constraints this would necessitate defining t ∨ ⊤ = ⊤ too. These redefinitions would mean that disjunction in Q would not be the meet with respect to the truth ordering of Q. Which would be the better definition is not clear; more theoretical and practical work must be done.

8.1.2 The Temporal Logic TL

Q can only describe the instantaneous state of a model. The temporal logic TL uses Q as its base, and can describe the evolving behaviour of the model over time. Note that the choice of Q as the base of the temporal logic leaves much freedom in choosing the temporal operators of a temporal logic, and other temporal logics could be built on top of Q.

Previous temporal logics proposed for model checking partially ordered state spaces could not be as expressive as TL because they were based on a two-valued logic. In particular, TL supports negation and disjunction fully. In the examples explored in this thesis, the expressiveness of the logic was quite sufficient (the problems encountered with some of the verifications were caused by shortcomings of the model checking algorithms). Nevertheless, whether introducing new temporal operators is worthwhile is an interesting question, especially if the model structure were extended (see Section 8.2.1).

8.1.3 Symbolic Trajectory Evaluation

STE has been used successfully in the past for model checking partially ordered state spaces. However, previous work only supported a restricted temporal logic. The thesis showed that the theory of STE could be generalised to deal with the whole of TL, and a number of practical algorithms were proposed for model checking a significant subset of TL. In particular, the four-valued logic of Q proved a good technical framework for STE-based algorithms.

8.1.4 Compositional Theory

The increase in expressiveness makes the need to overcome the performance bottlenecks of model checking more alluring and more important computationally. One of the primary contributions of the research is the development of a sound compositional theory for STE-based model checking using TL formulas. A set of sound inference rules can be used to deduce results: the base rule uses STE to verify a property of a model; the other rules can be used to combine properties previously proved.
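One natural way to package such a rule set is the LCF style: verified results are values of an abstract type that only the trusted rules can construct. The sketch below is illustrative Python, not the VossProver or the prototype verification systems built for the thesis; run_ste, the token mechanism and the formula encoding are invented for the example.

# Illustrative sketch only: an LCF-style discipline in which values of type
# Assertion can be produced only by the trusted rules below, so anything of
# that type has a derivation behind it.  run_ste() stands in for the real
# trajectory-evaluation engine.

_KERNEL = object()                      # token that only the rules below pass in

class Assertion:
    def __init__(self, ante, cons, _token=None):
        if _token is not _KERNEL:
            raise ValueError("assertions may only be built by inference rules")
        self.ante, self.cons = ante, cons
    def __repr__(self):
        return f"|= <| {self.ante} ==> {self.cons} |>"

def run_ste(model, ante, cons):
    # stand-in: the "model" here is just a table of checkable pairs
    return (ante, cons) in model

def by_ste(model, ante, cons):          # base rule: appeal to the model checker
    if not run_ste(model, ante, cons):
        raise ValueError("STE run failed")
    return Assertion(ante, cons, _token=_KERNEL)

def conjunction(a, b):                  # combine two previously proved results
    return Assertion(("and", a.ante, b.ante),
                     ("and", a.cons, b.cons), _token=_KERNEL)

def time_shift(a, t):                   # shift a result t time units forward
    return Assertion(("next", t, a.ante),
                     ("next", t, a.cons), _token=_KERNEL)

model = {("in=1", "next out=1"), ("in=0", "next out=0")}
r1 = by_ste(model, "in=1", "next out=1")
r2 = time_shift(by_ste(model, "in=0", "next out=0"), 1)
print(conjunction(r1, r2))

The design choice this illustrates is that soundness is concentrated in a small kernel of rules, while arbitrary FL-style scripting can be layered on top without endangering it.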
At a practical level, the compositional theory can be used to implement a hybrid verification system that uses both theorem proving and model checking for verification. BDD-based model checking algorithms are extremely effective in proving many properties. However, there are inherent computational limits in what these methods can do; by using a theorem prover which implements the compositional theory, these limits can be overcome to a great extent. By providing automatic assistance, increasing the level of abstraction, and, most importantly, by providing a powerful and flexible user interface to the theorem prover (through FL), the task of the human verifier using the theorem prover can be made easier. Features of this approach are:

• An appropriate verification methodology can be applied at the appropriate level — model checking at the low level, theorem proving at a higher level.

• STE supports a good model of time. This makes it suitable to verify not only functional correctness, but many timing properties.

• In the verification, although the implementation is given at a low level (e.g. at the gate or switch level), the correctness specification (viz. the TL formulas used) is, through the use of data abstraction, at a fairly high level.

• User intervention is necessary. Low-level verification through STE, and important heuristics in the theorem proving component, are important in alleviating the burden the verifier might otherwise encounter.

To illustrate the effectiveness of the approach, a number of circuits were completely verified. The largest of these circuits is one of the circuits in the IFIP WG10.5 Benchmark suite and contains over 100 000 gates. A serious timing error was discovered in the verification. This experimental work showed that increasing the expressiveness of the temporal logics that STE supports not only means that more properties can be expressed, but that, through the use of the compositional theory, it is computationally feasible.

8.2 Future Research

The research has raised a number of research issues, and left some questions only partially answered.

8.2.1 Non-determinism

The lattice structure of the state space means that although the next state function is deterministic, non-determinism can be implicitly represented through the use of X values. Although suitable for dealing with non-deterministic behaviour of inputs of circuit models, this treatment of non-determinism is not very sophisticated. One avenue of research would be to investigate the possibility of incorporating non-determinism explicitly within the model structure by replacing the next state function Y with a next state relation. Whether the semantics would be linear or branching time needs exploration, although I conjecture that a branching time semantics would be more suitable. Trees, rather than sequences or trajectories, would be used to model behaviour (and properties verified using symbolic trajectree evaluation). This would clearly raise the issue of the expressiveness of TL, and the need for operators that express path switching.

8.2.2 Completeness and Model Synthesis

The work of Zhu and Seger [130] showed that the compositionality theory for trajectory formulas [78], with minor modification, is complete in the following sense. If K is a set of assertions, there is a weakest model M such that each assertion in K holds of the model. Moreover, any assertion that is true of M can be derived from K using the compositional theory.
Whether the same thing is true of the compositional theory for TL needs further investigation.

This question is important from a practical point of view. Being able to construct such a weakest model from a set of assertions can be very useful for specification validation. It can also be used for verification, as discussed in Section 7.4.4, where a possible verification strategy for the verification of the systolic array multiplier was outlined. After proving that the individual base modules of the circuit work correctly, it should be possible to construct a model (the extracted model) of the circuit from the set of assertions proved of the base modules; these assertions extract out the essential behaviour of the circuit. Then, the overall behaviour of the circuit can be verified by performing STE on the extracted model. This raises the question of how to execute temporal logics efficiently, which involves interesting theoretical and practical questions (see [57] for an introduction).

The key to making this efficient is, I conjecture, that the appropriate data structures should be used for representing the extracted model. In particular, given that BDDs are a very good representation of bit-level descriptions of the circuit, it is unlikely that using a BDD representation for the extracted model will gain significant improvement in performance, and for a multiplier circuit, it will certainly fail. Rather, the extracted model should be used as a method for finding a higher-level description of the circuit. For example, in the case of the array multiplier, an integer-level description would be suitable. Even using a non-canonical representation of integers would allow STE to be accomplished in this particular case. What is important is that it should be easy to apply domain information to the problem.

Note that from a practical point of view, it may not be necessary for the compositional theory to be complete, provided that all, or most, interesting properties can be derived. If the compositional theory is not complete, then the usefulness of this approach must be determined experimentally.

8.2.3 Improving STE Algorithms

Although the STE-based algorithms presented here were shown to be effective, they are not capable of model checking all assertions. There are two major aspects that need research.

• Enriching the antecedent
So far the STE-based algorithms require that the antecedents be trajectory formulas. Although the use of the compositional theory ameliorates this restriction, it would be desirable to support richer antecedents. The key question is how sets of sequences can efficiently be represented. Through the introduction of fresh boolean variables it is possible to represent the union of two sets, thereby increasing the types of formulas for which relatively simple representations exist for their defining sequence sets. How efficiently can this be implemented? Are there alternative representation techniques?

• Supporting the infinite temporal operator
At present there are no general algorithms for supporting the until operator and the derived infinite temporal operators. To do this requires not only an efficient way to represent a set of states, but also efficient methods of performing operations such as set union and comparison. STE uses parametric representation of state, which allows extremely large state spaces to be represented. This representation does not yet support efficient set manipulation operations.
Thus, an important research question is how these operations can be implemented efficiently.

8.2.4 Other Model Checking Algorithms

This leads on to the question of whether model checking algorithms other than those based on symbolic trajectory evaluation would be effective. It appears that adapting the traditional BDD-style model checking algorithms, such as those described in [26], to deal with partially-ordered state spaces would be possible. The logical framework developed here — Q, TL and the various satisfaction relations — would form the basis of such adaptation. The research question is how these model checking algorithms could be adapted to make use of partial information in an effective way. Particularly if extended to deal with non-determinism, an advantage of these model checking algorithms is that they would support model checking of properties requiring more expressive formulas than those of the style of verification supported by STE.

8.2.5 Tool Development

The prototypes developed in the course of this research have shown that efficient, usable tools can be developed to support the compositional theory. The key components are supporting powerful, easy use of domain knowledge, and the provision of a flexible user interface through FL. Although the prototypes were successful, they were prototypes and contained a number of ad hoc features. Not only is a cleaner implementation required, but there are some issues which need further attention.

• Forward or backward style of proof. The prototypes used the forward style of proof, whereas Seger's VossProver used the backward style of proof. While I believe that the forward style of proof is more appropriate for hardware verifications using this approach, the issue is not clear.

• Incorporating new domain knowledge. The use of decision procedures and the incorporation of domain knowledge in other ways is important. Standard packages for types such as bit vectors and integers must be provided, and it would be desirable to have a clean way for users to integrate new theories or extend old ones.

• Partial automation of theorem proving. Although using STE for much of the verification alleviates much of the tedium traditionally associated with low-level verification using theorem provers, it is desirable to automate as much as possible. The use of heuristics for finding time-shifts and specialisations needs to be extended.

• Debugging facilities. When errors are detected it is important that meaningful error messages be provided. One issue is relating higher-level concepts (e.g. an equation involving integers) to lower-level concepts (e.g. values on bit-valued nodes). Another issue is intelligent intervention when errors occur — determining what information is needed for the user to correct the proof and presenting it in a meaningful way. This is a general lesson for verification systems [105].

Epilogue

Verification is a central theoretical and practical problem of computer science, and much research is being done on different facets of the problem. Systems with very large state spaces pose a particular challenge for verification, especially when a detailed account of timing is important. For these types of state space, partial order representations can be very effective.
The three major contributions of this thesis have been: • Developing a suitable theoretical framework for a temporal logic used to describe the behaviour offinitestate systems with lattice-structured state spaces; • Extending symbolic trajectory evaluation techniques to provide effective model checking for an important class of assertions about these systems; and • Developing and implementing a compositional theory for model checking, which allows the successful integration of theorem proving and automatic model checking approaches in a practical tool that can successfully verify large circuits.  Bibliography  [1] A. Arnold and S. Brlek. Automatic Verification of Properties in Transition Systems. Software — Practice and Experience, 25(6):579-596, June 1995. [2] M . Aagaard and C.-J.H. Seger. The Formal Verification of a Pipelined Double-Precision IEEE Floating-Point Multiplier. In ACM/IEEE International Conference on ComputerAided Design, pages 7-10, November 1995. [3] M . Abadi and L. Lamport. Composing specifications. ACM Transactions on Programming Languages and Systems, 15(1):73—172, January 1993. [4] Advanced Micro Devices. PAL Device Handbook. Advanced Micro Devices, Inc., 1988. [5] H.R. Andersen, C. Stirling, and G. Winskel. A Compositional Proof System for the Modal //-calculus. In Proceedings of the 9th Annual Symposium on Logic in Computer Science, June 1994. [6] A. Aziz, T.R. Shiple, V. Singhal, and A.L. Sangiovanni-Vincentelli. Formula-Dependent Equivalence for Compositional CTL Model Checking. In Dill [48], pages 324-337. [7] R.C. Backhouse. Program Construction and Verification. Prentice-Hall, London, 1986. [8] D.L. Beatty. A Methodology for Formal Hardware Verification with Application to Microprocessors. PhD thesis, Carnegie-Mellon University, School of Computer Science, 1993. [9] I. Beer, S. Ben-David, D. Geist, R. Gewirtzman, and M . Yoeli. Methodology and system for practical formal verification of reactive hardware. In Dill [48], pages 182-193. [10] N.D. Belnap. A useful four-valued logic. In J.M. Dunn and G. Epstein, editors, Modern Uses of Multiple Valued Logic. D. Reidel, Dordrecht, 1977. [11] S.A. Berezine. Model checking in //-calculus for distributed systems. Technical Paper, Department of Mathematics, Novosibirsk State University, 1994. [12] O. Bernholtz and O. Grumberg. Buy One, Get One Free !!! In Gabbay and Ohlbach [61], pages 210-224. [13] M.A. Bezem and J. Groote. A Correctness Proof of a One-bit Sliding Window Protocol in //CRC. The Computer Journal, 37(4):289-307, 1994. 192  Bibliography  193  [14] B. Boigelot and P. Wolper. Symbolic Verification with Periodic Sets. In Dill [48], pages 55-67. [15] T. Bolognesi and E. Brinksma. Introduction to the ISO Specification Language LOTOS. Computer Networks and ISDN Systems, 14:25-59, 1987. [16] S. Bose and A. Fisher. Automatic verification of synchronous circuits using symbolic logic simulation and temporal logic. In Claesen [33], pages 151-158. [17] R.S. Boyer and J.S. Moore. A Computational Logic Handbook. Academic Press, 1988. [18] J.C. Bradfield. A Proof Assistant for Symbolic Model-Checking. In G. von Bochmann and D.K. Probst, editors, CAV '92: Proceedings of the Fourth International Workshop on Computer Aided Verification, Lecture Notes in Computer Science 663, pages 316-329, Berlin, 1992. Springer-Verlag. [19] J.C. Bradfield. Verifying Temporal Properties of Systems. Birkhauser, Boston, 1992. [20] S.D. Brookes, C.A.R. Hoare, and A.W. Roscoe. A Theory of Communicating Sequential Processes. 
Journal of the Association for Computing Machinery, 31(3):560-599, July 1984. [21] R.E. Bryant. On the Complexity of VLSI Implementations and Graph Representations of Boolean Functions with Application to Integer Multiplication. IEEE Transactions on Computers, 40(2):205-213, February 1991. [22] R.E. Bryant. Symbolic Boolean Manipulation with Ordered Binary-Decision Diagrams. ACM Computing Surveys, 24(3):293-318, September 1992. [23] R.E. Bryant, D.L. Beatty, and C.-J. H. Seger. Formal Hardware Verification by Symbolic Ternary Trajectory Evaluation. In Proceedings of the 28th ACM/IEEE Design Automation Conference, pages 397^107. 1991. [24] R.E. Bryant and Y.-A. Chen. Verification of Arithmetic Functions with Binary Moment Diagrams. Technical Report CMU-CS-94-160, School of Computer Science, Carnegie Mellon University, May 1994. [25] R.E. Bryant and C.-J. H. Seger. Formal Verification of Digital Circuits by Symbolic Ternary System Models. In E.M Clarke and R.P Kurshan, editors, Proceedings of Computer-Aided Verification '90, pages 121-146. American Mathematical Society, 1991. [26] J.R Burch, E.M. Clarke, D.E. Long, K.L. McMillan, and D.L. Dill. Symbolic Model Checking for Sequential Circuit Verification. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 13(4):401-424, April 1994.  194  Bibliography  [27] J.R. Burch, E.M. Clarke, and K.L. McMillan. Symbolic Model Checking: 10° States and Beyond. Information and Computation, 98(2): 142-170, June 1992. 2  [28] J.R. Burch and D.L. Dill. Automatic Verification of Pipelined Microprocessor Control. In Dill [48], pages 68-80. [29] O. BurkartandB. Steffen. Model Checking for Context-Free Processes. In W.R. Cleaveland, editor, CONCUR '92: Proceedings of the Third International Conference on Concurrency Theory, Lecture Notes in Computer Science 630, pages 123-137, Berlin, 1992. Springer-Verlag. [30] C.A.J, van Eijk and G.L.J.M. Janssen. Exploiting Structural Similarities in a BDD-based Verification Method. In Kumar and Kropf [92], pages 110-125. [31] L. Carroll. Through the Looking Glass and what Alice saw there. Macmillan and Co., London, 1894. [32] S. Christensen, H. Huttel, and C. Stirling. Bisimulation Equivalence is Decidable for all Context-Free Processes. Technical Report ECS-LFCS-92-218, Laboratory for Foundations of Computer Science, Department of Computer Science, University of Edinburgh, June 1992. [33] L.J.M. Claesen, editor. Proceedings of the IFIP WG 10.2/WG 10.5 International Workshop on Applied Formal Methods for Correct VLSI Design (November 1989), Amsterdam, 1990. North-Holland. [34] E. Clarke, M . Fujita, and X. Zhao. Hybrid Decision Diagrams: Overcoming the Limitations of MTBDDs and BMDs. Technical Report CMU-CS-95-159, School of Computer Science, Carnegie Mellon University, April 1995. [35] E. Clarke and X. Zhao. Word Level Symbolic Model Checking: A New Approach for Verifying Arithmetic Circuits. Technical Report CMU-CS-95-161, School of Computer Science, Carnegie Mellon University, May 1995. [36] E.M. Clarke, E.A. Emerson, and A.P. Sistla. Automatic Verification of Finite-State Concurrent Systems Using Temporal Logic Specifications. ACM Transactions on Programming Languages and Systems, 8(2):244—263, April 1986. [37] E.M. Clarke, T. Filkorn, and S. Jha. Exploiting symmetry in temporal logic model checking. In Courcoubetis [45], pages 450-462. [38] E.M. Clarke, O. Grumberg, and K. Hamaguchi. Another Look at LTL Model Checking. In Dill [48], pages 415-427.  Bibliography  195  [39] E.M. Clarke, O. 
Grumberg, and D.E. Long. Model Checking and Abstraction. ACM Transactions on Programming Languages and Systems, 16(5), September 1994. [40] E.M. Clarke, D.E. Long, and K.L. McMillan. Compositional Model Checking. In IEEE Fourth Annual Symposium on Logic in Computer Science, Washington, D.C., 1989. IEEE Computer Society. [41] R. Cleaveland, J. Parrow, and B. Steffen. The Concurrency Workbench: A SemanticsBased Tool for the Verification of Concurrent Systems. ACM Transactions on Programming Languages and Systems, 15(1):36—72, January 1993. [42] A. Cohn. The Notion of Proof in Hardware Verification. Journal ofAutomated Reasoning, 5(2): 127-139, June 1989. [43] O. Coudert, C. Berthet, and J.C. Madre. Verification of Sequential Machines Using Boolean Functional Vectors. InClaesen [33], pages 179-195. [44] O. Coudert and J.C. Madre. The Implicit Set Paradigm: A New Approach to Finite State System Verification. Formal Methods in System Design, 6(2): 133-145, March 1995. [45] C. Courcoubetis, editor. Proceedings of the 5th International Conference on ComputerAided Verification, Lecture Notes in Computer Science 697, Berlin, July 1993. SpringerVerlag. [46] B. Cousin and J. Helary. Performance Improvements of State Space Exploration by Regular and Differential Hashing Functions. In Dill [48], pages 364-376. [47] M. Darwish. Formal Verification of a 32-Bit Pipelined RISC Processor. MASc Thesis, University of British Columbia, Department of Electrical Engineering, 1994. [48] D.I. Dill, editor. CAV '94: Proceedings of the Sixth International Conference on Computer Aided Verification, Lecture Notes in Computer Science 818, Berlin, June 1994. Springer- Verlag. [49] J. Dingel and T. Filkorn. Model checking for infinite state systems using data abstraction, assumption-commitment style reasoning and theorem proving. In Wolper [128], pages 55-69. [50] M . Donat. Verification Using Abstract Domains. Unpublished paper, Department of Computer Science, University of British Columbia, April 1993. [51] A. Conan Doyle. The Adventures of Sherlock Holmes. Oxford University Press, Oxford, 1993.  Bibliography  196  [52] E.A. Emerson. Temporal and Modal Logic. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B (Formal Models and Semantics), chapter 16, pages 995-1072. Elsevier, Amsterdam, 1990. [53] E.A. Emerson and J.Y. Halpern. "Sometimes" and "Not Never" Revisited. Journal of the Association for Computing Machinery, 33(1): 151-178, January 1986. [54] R. Enders, T. Filkorn, and D. Taubner. Generating BDDs for Symbolic Model Checking in CCS. In K.G. Larsen and A. Skou, editors, CAV '91: Proceedings of the Third International Conference on Computer Aided Verification, Lecture Notes in Computer Science 571, pages 203-213, Berlin, June 1991. Springer-Verlag. [55] J. Esparza. On the decidability of model checking for several /^-calculi and Petri nets. Technical Report ECS-LFCS-93-274, Laboratory for Foundations of Computer Science, Department of Computer Science, University of Edinburgh, July 1993. [56] J.C. Fernandez, A. Kerbrat, and L. Mounier. Symbolic Equivalence Checking. In Courcoubetis [45], pages 85-96. [57] M . Fisher and R. Owens. An Introduction to Executable Modal and Temporal Logics. In M . Fisher and R. Owens, editors, IJCAP 93 Workshop: Executable Modal and Temporal Logics, Lecture Notes in Artificial Intelligence 897, pages 1-39, Berlin, Aug 1993. [58] M . Fitting. Bilattices and the Theory of Truth. 18(3):225-256, August 1989.  Journal of Philosophical Logic,  [59] M . Fitting. 
Bilattices and the Semantics of Logic Programming. The Journal of Logic Programming, 11(2):91—116, August 1991. [60] K. Frenkel. An Interview with Robin Milner. Communications of the ACM, 36(1 ):9097, January 1993. [61] D.M. Gabbay and H.J. Ohlbach, editors. ICTU 94: Proceedings of the First International Conference on Temporal Logic, Lecture Notes in Artificial Intelligence 827, Berlin, Aug 1994. [62] A. Galton, editor. Temporal Logics and their Applications. Academic Press, London, 1987. [63] M.R. Garey and D.S. Johnson. Computers and Intractability: A Guide to the Theory of NF'-Completeness. W.H. Freeman and Company, New York, 1979. [64] Globe and Mail. Intel finds 'subtle flaw' in chip. The Globe and Mail, 25 November 1994, p. B16, November 1994.  197  Bibliography  [65] Globe and Mail. Intel takes charge. The Globe and Mail, 18 January 1995, p. B2, February 1995. [66] D.Goldberg. Computer arithmetic. In Computer Architecture: a quantitative approach by J.L. Hennessy and D.A. Patterson, chapter Appendix A, pages A0-A65. Morgan Kaufmann, San Mateo, California, 1990. [67] M. J.C. Gordon. Programming Language Theory and its Implementation. Prentice-Hall, London, 1988. [68] M.J.C. Gordon, editor. Introduction to HOL: a theorem proving environment for higher order logic. Cambridge University Press, Cambridge, 1993. [69] M.J.C. Gordon, CP. Wadsworth, and R. Milner. Edinburgh LCF : a mechanised logic of computation. Lecture Notes in Computer Science 78. Springer-Verlag, Berlin, 1979. [70] S. Graf and C. Loiseaux. A tool for symbolic program verification and abstraction. In Courcoubetis [45], pages 71-84. [71] O. Grumberg and R.P. Kurhsan. How Linear Can Branching-time Be? In Gabbay and Ohlbach [61], pages 180-194. [72] O. Grumberg and D.E. Long. Model Checking and Modular Verification. ACM Transactions on Programming Languages and Systems, 16(3):843-871, May 1994. [73] A. Gupta. Formal hardware verification methods: A survey. Formal Methods in System Design, 1(2/3): 151-238, October 1992. [74] N.A. Harman and J.V Tucker. Algebraic models and the correctness of microprocessors. In Milne and Pierre [101], pages 92-108. [75] J. Harrison. Binary decision diagrams as a HOL derived rule. The Computer Journal, 38(2): 162-170, 1995. [76] S. Hazelhurst and C.-J. H. Seger. A Simple Theorem Prover Based on Symbolic Trajectory Evaluation and OBDDs. Technical Report 93-41, Department of Computer Science, University of British Columbia, November 1993. Available by anonymous ftp as ftp://ftp.cs.ubc.ca/pub/local/techreports/1993/TR 93-41.ps.gz. :  [77] S. Hazelhurst and C.-J. H. Seger. Composing symbolic trajectory evaluation results. In Dill [48], pages 273-285. [78] S. Hazelhurst and C.-J.H. Seger. A Simple Theorem Prover Based on Symbolic Trajectory Evaluation and BDD's. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 14(4):413^122, April 1995.  Bibliography  198  [79] P. Heath. The Philosopher's Alice. Academy Editions, London, 1974. [80] M . Hennessy. Algebraic Theory of Processes. MIT Press, Cambridge, MA., 1988. [81] M . Hennessy and R. Milner. Algebraic Laws for Non-determinism and Concurrency. Journal of the Association for Computing Machinery, 32(1): 137-161, January 1985. [82] C.A.R. Hoare. An Axiomatic Basis for Computer Programming. Communications of the ACM, 12(10):576-580, 583, October 1969. [83] R. Hojati and R.K. Brayton. Automatic Datapath Abstraction in Hardware Systems. In Wolper [128], pages 98-113. [84] H. Hungar. 
Combining Model Checking and Theorem Proving to Verify Parallel Processes. In Courcoubetis [45], pages 154-165. [85] H. Hungar and B. Steffen. Local Model Checking for Context Free Processes. In A. Lingas, R. Karlsson, and S. Carlsson, editors, Proceedings of the 20th International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science 700, pages 593-605, Berlin, July 1993. Springer-Verlag. [86] W.A.Hunt. FM8501: A Verified Microprocessor. Lecture Notes in Artificial Intelligence 795. Springer-Verlag, Berlin, 1994. [87] P. Jain and G. Gopalakrishnan. Efficient symbolic simulation-based verification using the parametric form of boolean expressions. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 13(8): 1005-1015, August 1994. [88] S.D. Johnson, P.S. Miner, and A. Camilleri. Studies of the Single Pulser in Various Reasoning Systems. In Kumar and Kropf [92], pages 126-145. [89] B. Jonsson. Compositional specification and verification of distributed systems. ACM Transactions on Programming Languages and Systems, 16(2):259-303, March 1994. [90] J.J. Joyce and C.-J.H. Seger. Linking BDD-based Symbolic Evaluation to Interactive Theorem-Proving. In Proceedings of the 30th Design Automation Conference. IEEE Computer Society Press, June 1993. [91] T. Kropf. Benchmark-Circuits for Hardware-Verification. In Kumar and Kropf [92], pages 1-12. [92] R. Kumar and T. Kropf, editors. TPCD'94: Proceedings of the Second International Conference on Theorem Provers in Circuit Design, Lecture Notes in Computer Science 901, Berlin, September 1994. Springer-Verlag.  Bibliography  199  [93] R.P Kurshan and L. Lamport. Verification of a Multiplier: 64 Bits and Beyond. In Courcoubetis [45], pages 166-179. [94] L. Lamport. The Temporal Logic of Actions. ACM Transactions on Programming Languages and Systems, 16(3), May 1994. [95] D.E. Long. Model Checking, Abstraction, and Compositional Verification. PhD thesis, Carnegie-Mellon University, School of Computer Science, July 1993. Technical report CMU-CS-93-178. [96] K. Marzullo, KB. Schneider, and J. Dehn. Refinement for Fault-Tolerance: An Aircraft Hand-off Protocol. Technical Report 94-1417, Department of Computer Science, Cornell University, April 1994. [97] M.C. McFarland. Formal Verification of Sequential Hardware: A Tutorial. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 12(5):633-654, May 1993. [98] K.L. McMillan. Symbolic Model Checking: An Approach to the State Explosion Problem. PhD thesis, Carnegie-Mellon University, School of Computer Science, 1993. [99] K.L. McMillan. Hierarchical representations of discrete functions, with applications to model checking. In Dill [48], pages 41-54. [100] C. Mead and L. Conway. Introduction to VLSI Design. Addison-Wesley, Reading, Massachusetts, 1980. [101] G.J. Milne and L. Pierre, editors. CHARME '93: IFIP WG10.2 Advanced Research Working Conference on Correct Hardware Design and Verification Methods, Lecture Notes in Computer Science 683, Berlin, May 1993. Springer-Verlag. [102] R. Milner. 1989.  Communication and Concurrency. Prentice-Hall International, London,  [103] F. Moller and S.A. Smolka. On the Computational Complexity of Bisimulation. ACM Computing Surveys, 27(2):287-289, June 1995. [104] New Scientist. Flawed chips bug angry users. New Scientist, (1955): 18, December 1994. [105] S. Owre, J. Rushby, N. Shankar, andF. vonHenke. Formal Verification for Fault-Tolerant Architectures: Prolegomena to the Design of PVS. 
IEEE Transactions on Software Engineering, 21(2): 107-125, February 1995.  Bibliography  200  [106] S. Owre, J.M. Rushby, and N. Shankar. PVS: A Prototype Verification System. In D. Kapur, editor, Proceedings of the 11th International Conference on Automated Deduction — CADE-11, Lecture Notes in Computer Science 607, pages 748-752, Berlin, March 1992. Springer-Verlag. [107] L.C.Paulson. MLforthe working programmer. Cambridge University Press, New York, 1991. [108] L. Pierre. VHDL Description and Formal Verification of Systolic Multipliers. In D. Agnew, L. Claesen, and R. Camposano, editors, Computer hardware description languages and their applications : proceedings of the 11th IFIP WG10.2 International Conference on Computer Hardware Description Languages and their Applications - CHDL'93, number 32 in IFIP Transactions A, pages 225-242, Berlin, 1993. Springer-Verlag. [109] L. Pierre. An automatic generalization method for the inductive proof of replicated and parallel structures. In Kumar and Kropf [92], pages 72-91. [110] D.Price. Pentium FDIV Flaw - lessons learned. IEEE Micro, 15(12):88-86, April 1995. [Ill] S. Rajan, N. Shankar, and M.K. Srivas. An Integration of Model Checking with Automated Proof Checking. In Wolper [128], pages 84-97. [112] J.M. Rushby and F. von Henke. Formal Verification Algorithms for Critical Systems. IEEE Transactions on Software Engineering, 19(1), January 1993. [113] M. Ryan and M . Sadler. Valuation Systems and Consequence Relations. InS. Abramsky, D.M. Gabbay, and T.S. Maibaum, editors, Handbook of Logic in Computer Science, volume 1 (Background: Mathematical Structures), chapter 1, pages 1-78. Clarendon Press, Oxford, 1992. [114] T. Ralston S. Gerhart, D. Craigen. Experience with Formal Methods in Critical Systems. IEEE Software, ll(l):21-28, June 1994. [115] C.-J.H. Seger. Voss — A Formal Hardware Verification System User's Guide. Technical Report 93-45, Department of Computer Science, University of British Columbia, November 1993. Available by anonymous ftp as ftp://ftp.cs.ubc.ca/pub/local/techreports/1993/TR-93-45.ps.gz. [116] C.-J.H. Seger and R.E. Bryant. Formal Verification by Symbolic Evaluation of PartiallyOrdered Trajectories. Formal Methods in Systems Design, 6:147-189, March 1995. [117] C.-J.H. Seger and JJ. Joyce. A Mathematically Precise Two-Level Hardware Verification Methodology. Technical Report 92-34, Department of Computer Science, University of British Columbia, December 1992.  201  Bibliography  [118] H. Simonis. Formal verification of multipliers. In Claesen [33], pages 267-286. [119] V. Stavridou. Formal methods and VLSI engineering practice. The Computer Journal, 37(2), 1994. [120] C. Stirling. Modal and temporal logics. In S. Abramsky, D.M. Gabbay, and T.S. Maibaum, editors, Handbook of Logic in Computer Science, volume 2 (Background: Computational Structures), pages 477-563. Clarendon Press, Oxford, 1992. [121] C.Stirling. Modal and Temporal Logics for Processes. Technical Report ECS-LFCS92-221, Laboratory for Foundations of Computer Science, Department of Computer Science, University of Edinburgh, June 1992. [122] C. Stirling and D. Walker. Local model checking in the modal mu-calculus. Theoretical Computer Science, 89(1): 161-177, October 1991. [123] P A . Subrahmanyam. Towards Verifying Large(r) Systems: A strategy and an experiment. In Milne and Pierre [101], pages 135-154. [124] A.M. Turing. On computable numbers, with an application to the Entscheidungsproblem. 
Proceedings of the London Mathematical Society (Second Series), 42:230-265, 1937. [125] A. Visser. Four Valued Semantics and the Liar. 13(2):181-212,May 1984.  Journal of Philosophical Logic,  [126] O. Wilde. The Importance of being Earnest: a trivial comedy for serious people. Vanguard Press, New York, 1987. [127] G. Winskel. A note on model checking the modal z/-calculus. In G. Ausiello, M . DezaniCiancagnlini, and S. Ronchi Delia Rocca, editors, Proceedings of the 16th International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science 372, pages 761-771, Berlin, 1989. Springer-Verlag. [128] P. Wolper, editor. CAV '95: Proceedings of the Seventh International Conference on Computer Aided Verification, Lecture Notes in Computer Science 939, Berlin, July 1995. Springer-Verlag. [129] Z. Zhu. A Compositional Circuit Model and Verification by Composition. In Kumar and Kropf [92], pages 92-109. [130] Z. Zhu and C.-J.H. Seger. The Completeness of a Hardware Inference System. In Dill [48], pages 286-298.  Appendix A  Proofs  A.l  Proof of Properties of TL  A.1.1 Proof of Lemma 3.3 Lemma A . l (Lemma 3.3). If g, h: S —>• Q are simple, then D(g) = D(h) implies that V s £ S,g(s) = h(s). Proof. To emphasise that D(g) = D(h), we set D = D(g). Let s £ 5. Let E = {q € Q : (s , q) £ D A s C 5} and let e = UE. The proof first shows that p(s) = e. q  q  (0)5(5) Z< e (1)  3s  £ S 9 (s ,g(s))  (2)  5  (3)  <7(s)£F,  p  P  p  C  £ £>  0 is simple. Definition of defining pair.  s  Definition of £ , (1), (2).  (4) flf( ) ^ U £  Definition of join.  5  (b)e^g(s) (1)  V K  )  ?  ) € D ,  g  ^ W  (2) 0(5) is an upperbound of E (3) UE^g(s) Thus 0(5) = e.  q = g(s ),s C 5, 0 is monotone. q  q  (1) Property of join.  Similarly, /i(s) = e. Therefore, g(s) = h(s). As s was arbitrary V s £ S,g(s) = h(s)  •  Note that the proof does not rely on the particular structure of Q; it only relies on Q being 202  203  Appendix A. Proofs  a complete lattice. A.1.2  Proof of Theorem 3.5  The idea behind the proof is to partition the domain of an arbitrary p : S —>• Q depending on the value of p(s). Then, we construct a function which enables us to determine which partition an element falls in. We can now use this information in reverse — once we know which partition an element falls into, we can return the value of the function for that element. The complication of the proof is to use the properties of Q to combine all this information together. As an analogy suppose that g : S —> {—10,10}. Suppose we know that g~(s) = 1 if g(s) = —10 and g~(s) = 0 otherwise; and g+(s) = 1 if g(s) = 10 and g+(s) = 0 otherwise. Then we can write g(s) = — lOg^(s) + 10g (s). The two steps in doing this were to find the +  and g  +  functions, and then to determine how to combine them. The proof of Theorem 3.5 follows a similar pattern: first the functions that are the equivalent of g- and g are given; after that it +  is shown that these functions can be combined to simulate p, and that they can be constructed from simple predicates. The functions given in the next definition are the analogues to the g_ and g functions. +  Definition A . l .  Suppose that we have an arbitrary monotonic function p : S —>• Q. Define the following: (T  3S e S,s C u,p(s) = f  t  otherwise.  t J_  3s G <S, s C u,p(s) = t otherwise.  T t  3s G <5, s C u, p(s) = otherwise.  T  •  The proof of Theorem 3.5 comes in two parts: first, Lemma A.2 demonstrates how to combine  Appendix A. 
Proofs  204  the Xq functions to construct a function p' that is equivalent to p; and Theorem A.3 concludes by showing that the functions Xf» Xt and X T can be defined from the simple predicates using TL operators. Lemma A.2.  Let p : S -» Q be a monotonic predicate. Define p'(u) by P ' ( « ) = Xt(«) A x f ( w ) A X T ( « ) V ->XT(U)-  Then, Vs € <S,p( ) = 5  (a) Suppose p(u) =J_ (1) Xf(«) = XT(W) = t,xt(u) =_L  By definition and monotonicity of p.  (2) p'(u) = t A 1 A t V f = J_= p(u)  (1)  Suppose p(u) = f (1) Xf( ) u  =  T , Xt( ) — -U XT(«) = t u  (2) j / ( u ) = l A T A t V f = f = p(u)  By definition and monotonicity of p. (1)  (c) Suppose p(u) = t  (1) Xt{u) = t ,  Xf («) = X T ( « ) = t.  (2) p'(u) = t A t A t V f = t = p(u)  By definition and monotonicity of p.  (1)  205  Appendix A. Proofs  (d) Suppose p(T) = T (1)  XT{U)  =  T, _L  7±Xt(u),  By definition and monotonicity of p.  t=<Xf(«) (2)  p'(u) y± At A T V T  (1) and monotonicity of A and V.  = f V T = T = P(u)  (3) p'(u) = p(u).  Since T = UQ.  • The final part of the proof is to show that the Xq functions can be constructed from the simple predicates. Theorem A.3 (Theorem 3.5).  For all monotonic predicates p : S —> Q, 3p' € TL such that Vs £ <S, p(s) = p'(s). Proof. Partition S according to the value of p: S = {s G S : p(s) =1}  5 = {s € 5 : p(s) = f}  5 = { s € 5 : p(s) = t}  • 5 = { s e 5 : p(s) = T } .  ±  f  t  T  Some of these sets may be empty. Now, for each s £ S we define x' '• $ ~> Q (each x' chars  s  acterises all elements at least as big as s) as follows:  {  t  s rz*  Note that each x' is simple. For the purpose of this lemma, define V 0 =1. The x' have the s  following two properties.  206  Appendix A. Proofs  Suppose 3s G S  9  s fZ u,p(s) = g/or s o m e  g  G Q.  Definition of x'-  (1)  ^( ) = t  (2)  V {x' ( ) v e S } = t  u  s £ Sq by supposition.  u  v  q  Suppose /Js G <S" 9 s fZ i/,p(s) = q for some q €. Q. (1) (2)  Vv £ S ,v g.u  Supposition.  q  V  {x«() ^ u  :u  ^9}  =  -  Either 5 is empty or follows from  L  g  (1).  Now, define: f{u) = -.( V { * ' » : s G Sf}) V T  X  Xt(«) = V {x'» : s G St}  XT(«) =  - ( V { X »  :  seS }) T  V  T  Using the properties of x' proved above, we have that: _T  3s G <S,3 [Z u,p(s) = f  Xf(«)  Xt(«)  I  t  otherwise.  t  3s G S, s fZ u,p(s) = t •  J_  otherwise.  T  3s G <S", s rz u, p(s) = T  t  otherwise.  XT(«) = j  Note that we have constructed from simple predicates the functions Xs given in Definition A. 1. Thus, by Lemma A.2, given an arbitrary monotonic predicate p, we are able to define it from simple predicates using conjunction, disjunction and negation - showing we can consider any monotonic state predicate as a short-hand for a formula of TL.  •  Appendix A. Proofs  207  A.1.3 Proof of Lemma 3.6 Lemma A.4 (Lemma 3.6). 1. Commutativity: 01  A 02 = 02 A C / i , 0 l V 0 = 0 V 0 i . 2  2  2. Associativity: (gi  V 0 ) V 03 = 0i V (02 V 0 ), 2  3  (01  A 0 ) A 03 = 0i A(0 A 0 )  =  -"(-"01 A - " f l T ) .  2  2  3  3. De Morgan's Law: 0iA0  = -"(-"01 V - i f l b ) ,  2  gi  V 02  2  4. Distributivity of A and V : hA( vg ) gi  = (hA ) V (/iA0 ), hv( Ag )  2  9l  2  5. Distributivity o / N e x t  gi  2  = (h V ) A(h V g ) 9l  2  :  N e x t ( 0 i A 0 ) = ( N e x t 0i) A ( N e x t 02), N e x t (01 V 0 ) 2  2  = ( N e x t 0 ) V ( N e x t 0 ). X  2  6. Identity: 0 V C = 0, 0 A C f  = 0.  t  7. Double negation: - , _ ,  0 = 0  Proo/ The proofs all rely on the application of the definition of satisfaction and the properties of Q. Let a € S be given. w  1. 
Follows from the commutativity of Q. 2. Follows from the associativity of Q. 3. Sat(a, 01 A0 ) = Sat(a, gi) A SanV,g ) 2  2  = ->(->Sat((T,gi) V -^Sat(a,g )) = -<Sat{<T,->g V ->0 ) 2  = 5ar((7,-"(-"0i V ->0 )). 2  Similarly, 0! V 02 = ->(-"0i A->0 )2  1  2  Appendix A. Proofs  208  4. Follows from the distributivity of Q. 5. Sat(cr,Next(g Ag )) = Sat(a>i,g A g ) 1  2  x  2  - Sat(o->i,gi) !\Sat(a>i,g ). 2  = Sat(cr, N e x t g\)  A Sat(a, Nextg 2 ).  = Sat(a, ( N e x t g ) A ( N e x t g )). t  2  The proof for disjunction is similar. 6. Follows since t is the identity for A with respect to Q and f is the identity for V with respect to Q. 7. Sat(a,->-ig) = -^Sat(a,->g) = ->^Sat(cr,g) — Sat(a,g)  •  Since a is arbitrary the result follows. A.1.4  Proof of Lemma 3.7  Lemma A.5 (Lemma 3.7).  If p is a simple predicate over C , then there is a predicate g £ TL„ such that p = g . n  v  p  Proof. Consider (s ,q) G D(p), and suppose that p(u) = q. Then, since p is simple, for all i = g  1,... ,n, s,[j]Cw[!]. What we will do is construct functions that enable us to check whether for alii = 1,... , n, s [i] C u [i]. This will be enough of a building block to complete the proof. q  We define the functions x'  s o q  that x' (h v) = t if s [i] C v[i] and x' (h ) =-L if q[i] % V\v  q  q  q  Formally, the x'fi are defined as: J_ V ->[i] when s [i] = L. q  x' (hv) = < j_ v [t] when s [i] = H. q  q  J_ V (->[i] A[i]) when s [i] = Z. q  s  V  Appendix A. Proofs  209  Informally, this means that x[q, v) indicates whether v is greater than the z-th component of p's defining value for q. Extending this, we get that A" (x' («, v)) returns t if s rz v and returns =1  g  q  J_ otherwise. Extend this further by defining: Xf(s) = AMXfihs))) V T , if 3(a,, f) G D(p) = _L,  otherwise.  Xt{s) = i=\ A (x'tih ))i  if3{s ,t)eD{p)  s  =_L,  t  otherwise.  XTW = -(.A(XT(M)))VT, =J_,  if3( ,f)ei>(p) ST  otherwise.  Considerxt- Suppose 3s such mats fZ uandp(s) = t. Since pis simple, s E -s- By transitivity t  s E u . Hence by the remarks above, xt(v) = t. On the other hand, if /3s such that s (Z u and t  p(s) = t, then either t is not in the range of p or s £ u. In either case Xt(u) =-L. t  For the case of q = f, T , suppose 3s such that s rz v and p(s) = g. Since p is simple, s C s. 7  By transitivity s FZ u . Hence by the remarks above, Xq( ) = T . On the other hand, if /3s such v  q  that s rz u and p(s) = g, then either q is not in the range of p or s  u . In either case Xq( ) = t. v  9  This implies that the definitions given of Xq here are equivalent to those of Definition A . l , and thus we can apply Lemma A.2. As the x are constructed here as formulas of T L , the proof q  is complete. A.2  Proofs of Properties of STE  This section contains proofs of theorems and lemmas stated in Chapter 4. Proof of Lemma 4.3 First, an auxiliary result.  n  •  Appendix A. Proofs  210  Lemma A.6. If g G TL,cjr = f,t,<y G A%),theng r< Sat(8,g). Proof. Let g G TL, 5 =  . . . G A (g). Proof by structural induction. ?  s sis 0  2  If g is simple (base case of induction): (1)  Either (s , ?) G £>(#) or (s , T ) G D(c/)  (2)  Saf(5,0) G {9, T }  Definition of Sat.  (3)  9 X 5ar((5, g)  Definition of < .  0  Definition of A  0  9  Letg = gi A g . 2  (1)  By definition.  Sat(S,g) = Sat(8, ) A Sat(8,g ) gi  2  Suppose q = t, i.e. c> G A ^ g ) . (2a)  38  1  (3a)  G A'(flfi), cT G 2  A"(g ) 9 £ = 2  U  Construction of A  8  2  Inductive assumption.  qdiSat(8 ,g ),q^Sat(S ,g2) 1  i  l  (4a)  g  (5a)  c7^5ar(5,0)  Monotonicity.  
X Sar(c>,£1), <? ^ Saf(<S,g) 2  (1), (4a), monotonicity of A.  Suppose q = f, i.e. 5 G A (0). f  (2b)  Either (or both) 8 G A % i ) or 8 G A»($r ) 2  Construction of A Suppose (without loss of generality) that 8 G A (#i). 9  Inductive assumption.  (3b)  f^fl^, i)  (4b)  Trivially,!  (5b)  f A l ^ Sat(8\ )  (6b)  But f A -L= f which concludes the proof.  5  ^Sat(8 ,g ) 2  2  gi  A Sat(8 ,g ) = Sat(8,g). 2  2  (3b), (4b)  Appendix A. Proofs  Letg = ->0i. gi  (2) (3) (4)  Construction of A.  SeA-i( )  (1)  Inductive assumption.  ^q±Sat{8, ) gi  Sat(8,g) = -iSat(6,gi).  -iq±^Sat(6,g)  Lemma 3.1(2).  Hence q ^ Sat(8, g).  Let g = Next g\. Construction of A.  . . . e A"( )  (1)  s = Xands  (2)  q ^ Sat(siS . . . ,gi)  Inductive assumption.  (3)  q ^ Sar(XsxS . . . ,Next5f!)  From (2), definition of Sat.  (4)  0  l 5 2  gi  2  2  Monotonicity of Sat.  q±Sat(8,g)  Suppose g = gi Untilg . 2  By definition, Saf(cr,flri Until02) = W^l (Sat(a> ,gi) A . . . A Sat(a>i-!,gi) A Sat(a>i,g 0  0  2  Let S G A (g) be given. ?  Suppose q — t, i.e. 8 G A*(<7i Untilg ) 2  (1)  3i 9 5 G A ^ N e x t V ) II . . . II A^Next^'" )^) II A^Next^) 1  Construction of A (2)  Vj = 0,... , i, 38 G A^Next^i) such that U{S : j = 0,... , i} = 8 j  j  Definition of II. (3)  SiQ 8, j = 0,... , i  (2), property of join  212  Appendix A. Proofs  (4)  t ^ Sat{8\ Next- ^), j = 0,... , i — 1  (5)  t •< Sat(8, Next^i)  (6)  Similarly t ^ Sar(£, Next'# )  (7)  t X 5a<cT, (NextV) A . . . A (Next^ )^) A (Next^ ))  Inductive assumption.  7  (4), monotonicity. 2  1  2  (5) and (6). (8)  t ^ Sat(S, gi Until g )  Definition of Sat.  2  Suppose q — f, i.e. 8 G A (^! Untilg ). f  2  (1)  Vi = 0,... , 3<? with 8 C 5 and {  <£*' G A ( N e x t V ) U .. . U A^Next^- '^) U A (Next^ ) f  1  f  2  Construction of A. (2)  Vi = 0,... ,8 G A (Next gi A . . . A Next( 'gi) A Next g i  f  0  i-1  J  2  Definition of A (3)  f X 5ar(cT ', Next°c/i A . . . A Next( '#i A Next # )  (4)  f <Sat( 8, gi Unt i 1 g )  8  i_1  f  Inductive assumption.  J  2  Definition of gi Unt i 1 g .  2  2  Lemma A.7 (Lemma 4.3). Let g G TL, and let a G S . For q = t,f,q< Sat(a, g) iff 38 G A (fir) with 8 C cr. w  g  9  9  Proo/ (=^) Assume that g X Sa^a, g). The proof is by structural induction. Suppose g is simple (base case of induction). (1)  q ^ g{o- ) and 3c/ G {<?, T } with (s /, q') G £>(#) and v C cr 0  g  <7 is simple. (2)  s , , X X . . . Ecr  (3)  But s ,XX... q  From(l).  G A%)  Suppose g = gihg . 2  Definition of A (g). q  0  Appendix A. Proofs  213  Suppose q = t. (1)  t •< Sat(a, gi), t -< Sat(a,g )  (2)  3S G A ' ( » , 6 e A ^ ) with  (3)  Let cT = S U cT  (4)  8C  (5)  Lemma 3.2(2).  2  1  2  1  4  8\8 Qa 2  Inductive assumption.  2  From (2). Definition of A(g)  JeA'fo)  Suppose q = f. (1) Either (or both) f ^ SanV, #i) or f ^ Sar(cr, 52)  Lemma 3.2(3)  Without loss of generality assume f z< Sat(a, gi). (2) (3)  38 e A ( ) with S Co1  ^ A  f  f  1  gi  Inductive assumption. Definition of A (g).  ( j )  Suppose g = ->g\. (1)  cjrE-«&tf(0-,flri)  Definition of Sat  (2)  ~-g^5flf(cr,c;i)  Lemma 3.1.  (3)  3<JG A ^ ( f l r i ) 9<JCcr  Inductive assumption.  (4)  <5eA%)  Definition of A(g).  Suppose g = Next gi. (1) (2)  Definition of Sat.  qdSat(a>!,gi) aaeA'teOBJCaM  Inductive assumption.  (3)  XcTGA%)  Construction of A(g).  (4)  X£Ccr  X C cr  Suppose g = gi U n t i l g . 2  0  Appendix A. Proofs  214  Suppose q = t. 3i 3 t r< 5ar(<r,NextV) A ... 
A Sat(a, Next ' 0i) A Sa^cr, Next 0 ) (,  -1)  l  2  Lemma 3.2(1) t z< Sat(a, Next*flf ) and  From (1), Lemma  2  3.2(2).  Vj = 0,... , i - 1, t ^ Sat(a, Next 0i) j  3<? € A (Next*flf ) 3 8 Qa t  Inductive assumption  {  2  Vj = 0,... , i - 1, 38 G A ^ N e x t ^ i ) 3 8 j  Let  5 =  3  Qa  8° U . . . U <J«'  5 G A ^ N e x t V ) E . . . II A^Next**'- )^) II A ^ N e x t ^ ) 1  Construction of A SQa  (3)  8 G A*(0i U n t i l g )  and  (4).  (5), construction of A .  2  Suppose q = f. V*,f ^ 5ar(a, Next V ) A ... A Sat(a, N e x t  (l_1)  0 i ) A 5a/(<r, Next 0 ) {  2  Lemma Either f X 5a?(cr, Next*$f ) or Bj € 0,... , i - 1 3 2  f  3.2(4)  r< Sar(a, Next 0i). J  Lemma Either 38' 3 S e A (Next 0 ) with 8' • cr, or 1  >1  f  !  3.2(3)  Inductive assumption.  %  2  3<P e A ( N e x t ^ ) and 8 C a. f  3  In either case, Vi, by construction 38 E A (Next°(/i) U . . . U A (Next^- )^) U A (Next'flr ) with 8 C cr {  f  f  1  f  i  2  Let 5 = Ll£ <$\ 0  8 e A (01 Until02) f  C cr  Construction of A . f  <S is a complete lattice.  215  Appendix A. Proofs  (<=) Let g G TL, a G S , and assume that 35 G A«($r) such that 5 C a. w  s  s  •  By Lemma A.6, g ^ Sat(S , g). By the monotonicity of Sar, <? ^ Sa?(cr, g). 9  A.2.1  Proof of Lemma 4.4  Lemma A.8 (Lemma 4.4).  Let g  G  TL, and let a be a trajectory. For q = t, f, q ^ Sar(<7, g) if and only if 3 r  s  G  ^(g)  with r C cr. 9  Proo/ (=>) Suppose g X Sat(a, g). By Lemma 4.3, 35 G A % ) such that 5 C cr. 5  s  Let T = T(5 ). Note that T G T % ) by construction and that 8 C r . 9  9  9  9  9  T C cr: the proof is by induction. 9  1-  = <Jg T  cr . 0  2. Assume T / • cr -. 4  3. Since a is a trajectory, Monotonicity of Y E Ct'+l  cr is a trajectory.  *f+l  E  Cr i  Since S • cr.  ^ i  =  ^ i U Y ( 7 f )  Definition of T .  9  i +  9  Property of join.  E CTj+l  ( « = ) Suppose 3T G T % ) such that T • CT. 5  As  9  G T (g), 36 G A % ) such that q  9  5 \ZT . 9  9  By transitivity, S C cr. By Lemma4!3, <? X Sat(6 ,g). 9  By monotonicity g X 5ar(cr, #).  9  •  216  Appendix A. Proofs  A.2.2  Proof of Theorem 4.5  Theorem A.9 (Theorem 4.5). If g and h are TL formulas, then A (h) Qv T\g) if and only if g=$>h.  •  l  Proof. (=>•) Recalling the definition of ==§> on page 71, suppose V r e T (g), 38 e A*(fc) 3  b  h  with^Cr . 3  Suppose t X Sat(a, g). By Lemma 4.4, 3r e T % ) such that r C cr. 9  5  By assumption then, 3S G A (/i), with S Q T . By transitivity, h  9  h  3  C cr.  By Lemma 4.3, g ^ Sat(a, h).  (<£=) Suppose for all trajectories cr, t ^ Sat(cr, g) implies that t -< Sat(a, h).  LetT eT\g). 9  Then by Lemma 4.4, t ^ Sat(r ,g). 9  By the assumption that g=^h, t ^ Sat(r , h). 9  By Lemma 4.3, 3<J e A"(/i) such that S Q r . fc  As T was arbitrary, the proof follows. 9  h  9  •  Appendix A. Proofs  A.3  217  Proofs of Compositional Rules for T L  n  Recall that in this section we are dealing solely with the realisable fragment of TL . n  Theorem A.10 (Identity - Theorem 5.14).  For all g e TL„, g=og. Proof. Let t = Sat(a,g). Clearly then t = Sat(a,g). Hence g=s>g.  •  Lemma A . l l .  Suppose g=C>h. Then Next g=$> Next h Proof. Let a G UT 3 t = Sat(a, Next g).  (1) t = Sat(a>i,g)  (2) t = Sat(a>i,h)  Definition of Sat.  g=$>h.  (3)  t = Sat(Xa> Next h) Definition of Sat.  (4)  t = Sat(a, Next h)  (5)  Next g =>Next h.  •  u  (3), monotonicity of Sat, Lemma 4.8.  Corollary A.12 (Time-shift - Theorem 5.15).  Suppose g=oh. Then Vt > 0, Next*<?=r>Next /i. <  Proof. Follows fromLemmaA.il by induction. Theorem A.13 (Conjunction - Theorem 5.16).  
Suppose 0i ==t>hi and g2 =t>/i 2  Then 0i A 02 = t > ^ i  A/J 2  Proof. Let cr e TZT and suppose t = Sat(a, g\ A 0 ). 2  •  Appendix A. Proofs  (1)  t = Sat(a,gi) A Sat(a,g )  Definition of Sat(a,gi A g ).  (2)  t = Sat(a,gi), i = 1,2  Lemma 3.2(2).  (3)  t = Sat(a,hi), i = 1,2  Sinceg =o/i , i = 1,2.  (4)  t = Sat(a,hi) ASat(a,h )  2  2  i  i  (3)  2  (5) t = Sat(a, hihh ) Definition of Sat(a, h A h ). As cr is arbitrary, gi A g =t>hi/\h . 2  x  2  2  2  Theorem A.14 (Disjunction - Theorem 5.17). Supposegi=t>/ii andg =t>h . Then g V g =t>hi V / i . 2  2  x  2  2  Praq/! Let cr G 7?.r d suppose t = Sat(a,gi V g ). a n  2  (1)  t = Sat(a,gi)V Sat(a,g )  Definition of 5ar(cr, ^ V g ).  (2)  t = 5af(cr, ^j), for i = 1 or i = 2  Lemma 3.2(1), Lemma 4.8.  (3)  t = Sat(a, hi), for i = 1 or i = 2 Since g =i>hi, i = 1,2.  (4)  t = Sat(a,hi) V Sat(a,h )  2  2  i=  (3)  2  (5) t = Sat(a,hi V /i ) As cr is arbitrary, gi V g =S>hi V fc . 2  Definition of Sar(<7,  V /i ). 2  2  2  Lemma A.15. Suppose A*(5f) CT> A (/i), a G 7^ and t = Sctf(cr, fc). k  r  Then t = Sat(a,g). Proof. (1)  t^Sat(<T,h)  t = Sat(a,h).  (2)  3£ G A*(fc) 9 < j C c r and t =< Sa^c), fc) (1), Lemma 4.3.  (3)  38' G A^s-) 9 5' • 8  Definition of Q .  (4)  t < Sat(8',g)  Lemma 4.3.  v  Appendix A. Proofs  219  (5)  8' C a  Transitivity of (2) and (3).  (6)  t X Sat(a, g) From (4) and (5) by Lemma 4.3.  (7)  t = Sat(a,g)  (1), Lemma 4.8.  • Theorem A.16 (Consequence - Theorem 5.18).  Suppose g=S>h and A (g) \Z A (gi) and A\hi) Q-p A^h). k  l  V  Then  0i=o/ii.  Proof. Suppose a £ TZj- is a trajectory such that t = Sat(a, gi). (1)  t = Sat(cr,g) Lemma A.15.  (2)  t = Sat(cr,h) g=$>h.  (3)  t = Sat( a,Lemma  (4)  g c>hi  A.15.  Since a is arbitrary.  1=  • Theorem A.17 (Transitivity - Theorem 5.19).  Suppose g =c>h andg =c>h and that A (g ) C. A (g ) II A * ( / i i ) . t  l  Then  1  2  t  2  2  v  1  =t>h .  gi  2  Proof. Suppose a £ TZT is a trajectory such that t = Sat(a, gi). (1)  t = Sat(a,hi)  g =c>hi  (2)  t = Sat(a,giAhi)  (3)  35 £  (4)  A*(01  A hi) = A (g ) II A * ( / i i )  (5)  35' £  A (0 )  1=  Definition of Sat(a, gi A hi). 5  A*(flfi A / i i ) 9 t  l  t  2  3 8'\Z8  C cr  Lemma 4.3. By definition of A*. A\g )n 2  v  A\gi)U  A\hi).  Appendix A. Proofs  220  (6)  8'Q a  Applying transitivity to (3) and (5).  (7)  t d Sat(a,92)  From (6) by Lemma 4.3.  (8)  t = Sat(a 92)  From (7) by Lemma 4.8.  (9)  t = Sat(a h )  g =0>h .  (10)  9i =*>h  Since a was arbitrary.  2  2  2  2  • Lemma A.18 (Substitution Lemma). Suppose |= (\ g=z>h } and let £ be a substitution: then f= i\ £(g)=$>£(h) Proof. Let (j) be an arbitrary interpretation of variables and a £ TZr be an arbitrary trajectory such that t = Sat(v,<f>(Z(g))).  (1) Let0' = 0o£ (2)  t = Sat(a,(f>'(g))  (3)  <f>' is an interpretation of variables  (4)  t = Sat(a,(f>'{h))  \=  (5)  t = Sat(a,(f)(t(h)))  Rewriting (4).  (6)  |= { £( )=t>£(h)} g  Rewriting supposition. By construction.  <j)  $ =i h). g  a n  >  d cr were arbitrary.  •  Lemma A.19 (Guard lemma). Suppose e G £ and |= \g=$>h \: then |= {(e  0)=o(e  /i) ).  Proof. Suppose t = Sat(a, e ^ g) for some a £ 7£<r- Recall that e =^ g = (->e) V 0, and note that Sat(a, ->e) G  By the definition of the satisfaction relation, either:  (i) t = Sat(a, ->e). In this case, by definition of satisfaction, t = Sat(a, e =>• h).  221  Appendix A. Proofs  (ii) t = Sat(a,g). In this case, by assumption Sat(a, h). So, by definition of satisfaction, t = Sat(a, e =>• h).  •  As a was arbitrary the result follows. 
Theorem A.20 (Specialisation Theorem — Theorem 5.20). Let E = [(ei, £ 1 ) , . . . , (e„, £ )] be specialisation, and suppose that |= {] g=S>h n  ThenHHGjO^E^n. (1)  Fori = 1,... , n , h U.-(flO=i>6('Or-  By Lemma A. 18.  (2)  H(e,-^6(s))=^(e,-=>  By Lemma A. 19.  (3)  h 4 A? (e-  (4)  |= ^S(^)=l>S(/»)^  =1  t  e<(0))=> A?=1(e,- =*  £(/i))  Repeated application of Theorem A. 13. By definition.  • Theorem A.21 (Until Theorem — Theorem 5.21). Supposeg =z>h\ and g2=l>/i2. Then g\ Untilg =f>^i Until h . l  2  2  Proof. Let a £ TZT be trajectory such that t = Sat(a,gi Untilg )a  2  (1)  Definition of Sat,  3i3 t = A}~ Sat(a, Next^i) A Sat(cr, Next^ ) 0  (2)  (3)  2  Lemma 3.2(1), Lemma 4.8.  t = Sat(a,Next g ) and i  2  t = Sat(a, Next^gi), j = 0,... , i' — 1  Lemma 3.2(2).  t = Sa^Next*^) and  g =C>h , Corollary A. 12.  t = 5ar(cr,Next- fci), j = 0,... ,i — 1  g =£>hi, Corollary A.12.  2  7  2  1  (4)  t = AJ~o 5ar(cr Next^i) A San>,-, Next '/i )  Definition of Sat.  (5)  t = Sat(a,hi Until / i )  Definition of Sat.  (6)  gi Untilg =t>hi Until fc  8  i?  2  2  2  2  Since a was arbitrary.  •  Appendix B Detail of testing machines  This chapter presents the details of testing machines. Section B . l formally defines composition of machines. Subsequent sections build on this by showing how testing machines can be constructed and composed with the circuit under test: Section B.2 presents some notation used; Section B.3 presents the building blocks from which testing machines are constructed; and Section B.4 shows how model checking is accomplished. B.l  Structural Composition  The focus of the research on composition is the property composition described in Chapter 5. However, sometimes it is also desirable to reason about different models and use partial results to describe the behaviour of the composition of the models. A full exploration of composing models of partially ordered state spaces is beyond the scope of this thesis — there are important considerations which need attention [129]. A partial exploration of the area is useful though for two reasons: (1) it gives a flavour of how structural composition could be used; and (2) some of the definitions given are needed in justifying the details of testing machines. The content of this section is very technical. Although conceptually the composition of systems is very simple, the notation needed to keep track of the detail is not. This section is included for completeness and the details of this section are not needed in understanding the thesis. This section has three parts. First, composition of models is defined formally. Second, inference rules for reasoning about a composed model is given. The third part elaborates on composition for circuit models, where the definition of composition has natural instantiation. 222  Appendix B. Detail of testing machines  B.l.l  223  Composition of Models  Definition B.l. Let Mi = ((Si, ^i),K Yi),M u  = « S , • ),U , Y ) , and M = ((S, C),7e,Y)be  2  2  2  2  2  models. Let X i , X and X be the bottom elements of Si, S and S; Zi, Z , and Z be their top 2  2  2  elements; and let Gi, G and G be the simple predicates of «S*i, S and S respectively. 2  2  If p : Si x S -» S, pi : G\ —> G and p : G -¥ G then M is a p-composition of Mi and 2  2  2  M if 2  1. pis monotonic; 2. p(X ,X ) = Xand. o(Z ,Z ) = Z; 1  2  /  1  2  3. q = gi(si) ==• q =  pi(gi)(p(s X ));  4. q = g {s ) =>• q =  p (g )(p(X s ));  2  5.  2  u  2  2  2  u  2  p(Yi(si),Y (s ))QY(p(si,s )). 
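To make Definition B.1 concrete before turning to its conditions, here is a small illustrative sketch (it is not part of the thesis development; the four-valued node lattice and the choice of a single shared node are assumptions made only for this example). It shows one p of the kind used later for circuit models: states are vectors of node values, and p glues the shared node together by taking the join of the two contributions.

# Minimal sketch: node values form the information lattice {X, 0, 1, T}, and
# rho pairs the two states, joining the last node of M1 with the first node
# of M2 (the single "soldered" node assumed for this example).
BOT, LO, HI, TOP = "X", "0", "1", "T"
LEQ = ({(BOT, v) for v in (BOT, LO, HI, TOP)}
       | {(LO, LO), (HI, HI), (LO, TOP), (HI, TOP), (TOP, TOP)})

def join(u, v):
    # least upper bound in the information ordering; the join of 0 and 1 is T
    if (u, v) in LEQ:
        return v
    if (v, u) in LEQ:
        return u
    return TOP

def rho(s1, s2):
    # pair the two states, joining the contributions to the shared node
    return tuple(s1[:-1]) + (join(s1[-1], s2[0]),) + tuple(s2[1:])

For this p, conditions 1 and 2 hold because join is monotonic and maps the pair of bottom (respectively top) elements to the bottom (respectively top) of the composed space; conditions 3-5 depend on the simple predicates and next state functions of the particular models being composed.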
2  2  2  The required properties on p may seem onerous, and indeed in general they may be too restrictive. However, for the application of composition needed in this thesis they are sufficient. In particular, for compositions where the 'outputs' of one circuit are connected to the 'inputs' of another, these conditions will be met. Definition B.2. We inductively extend the domain of the pi by defining • Pi(gAh) = pi(g) A pi(h); • pii^g) =  -"(^(sO);  • pi(Nextg) = Next pi(g);  • pi(g\Jntilh)  = pi(g) U n t i l pi(h).  Appendix B. Detail of testing machines  224  Definition B.3. Let Mi, M and be models and M a p-composition of Mi and Mi- Let cr £ <S^, cr £ <S^. 1  2  2  ^((J ,^ ) = p(ai,a )p(cr ,cr )... 1  2  2  1  1  1  2  Since we are dealing with different models, we modify the notation for the satisfaction relation and use the notation SatM, (cr, g) to refer to whether the sequence cr of the model M j satisfies 9-  Lemma B.l. Let Mi and M be models and M a /^-composition of Mi and M . Let a* £ <Sf, j = 1,2. 2  2  Suppose g £ TL(Sj) and c/ = SatMj{cr\g). Then  ^ Sat (p(a ,a ), 1  2  M  pj(g))-  Proof. The proof is by induction on the structure of g. We assume without loss of generality that j = 1. Suppose q = Sat^ (cr , g) where g is simple 1  (1)  q = g(a )  (2)  q = i(g){p{<Th,X ))  (3)  q^PiigM^a ))  Monotonicity.  Sat (p(cr\cr ),pi(g))  Definition of satisfaction.  (4)  Definition of satisfaction.  J  0  P  Definition B.l(3).  2  2  q±  2  M  Suppose q = Sat {e ,g A gb) l  Ml  SatMi{v -,gw)-, w = a,b l  (1)  Letc/u,  (2)  q = q Aq  (3)  q •<  (4)  q<  =  a  w  a  Definition of satisfaction.  b  Sat (p(a ,a ),pi(g )),w 1  2  M  w  Sat (p(o-\cr ),pi(g Ag )) 2  M  a  b  a, b Inductive assumption. Definition B.2 and satisfaction.  Suppose q = Sat^vi, (cr , ->h) 1  (1)  -ig =  Sat (cr ,h) l  Ml  Sat(a\-^f) = ^Sat(cr ,f) 1  Appendix B. Detail of testing machines  (2)  -. < gr  Sat {p{<j\cT ), {h))  (3)  q^  Sat {p{a ,a ),pi{^h))  225  2  M  Inductive assumption.  Pl  l  2  Definition B.2 and satisfaction.  M  Suppose q = Sat^ij (cr , N e x t h) 1  Sat(a , N e x t h) — Sat(a^ ,h)  (1)  q=  Sat (al h)  (2)  q-<  Sat (p(o'l. ,al ),pi(h))  (3)  q^  Ml  l  1  u  M  1  Inductive assumption.  l  Definition B.2 and satisfaction.  5flf(p(<7 ,<7 ), 5i(Next/i)) 1  2  /  Suppose q = S a t ^ (cr , / i ! U n t i l / i ) 1  2  (1)  Let c/j = Safx^a^g, / i i ) A . . . A Sa?  (2)  q = V? q =0  {o-l^hi)  Sat (o-l h ) Ml  ji  2  Definition of satisfaction.  i  r< 5ar>i(/o((7^ ,<7|o),Pi(^i)) A . . . A5ar7w(p(cr> _ ,cr , _ ),pi(/ii))A 2  (3)  J  0  3  x  2  q± y% (Sat {p{ 0  1  J  1  From (1) by induction assumption.  Sar^ (p (al -, <r| ^), p ( / i ) )  (4)  A  Pr(hi))A . . . A S ^ ( / > ( c r > _ , a | _ ) , p ( / i i ) ) A A 1  M  i  1  J  1  1  (2) and (3)  (5)  q^  SatM(p(cr\cr ),h \]ntilh )  Definition of satisfaction.  2  l  2  • Lemma B.2. Let Mi, M  2  and be models and M a p-composition of Mi and M .  Let i 6 {1,2}, and  2  suppose that: (1) p is a surjection; (2) cr =  p(a ,a ), and t •< 5a?x(cr, pi(gi)) implies that t •< Sat (a\gi). l  2  Then K< 1 Pi(9i)^>Pi{h)) iff KM, 4  Mi  h  226  Appendix B. Detail of testing machines  Proof. Suppose \= . M  <]  gi=$>hi  \  (1)  Suppose t •< Sat (a,  (2)  3a , a 3 cr = p(a ,a )  (3)  t < Sat (cr\gi)  (4)  t -< Sat , (o-'', hi)  (5)  t<  (6)  Therefore ^  M  1  1  p is a surjection.  2  By hypothesis (2).  Mi  Since  M  Suppose (1)  2  pi(gi))  M  <| p  (g ) = § > p ;  {  {  \= . § pi(gi)=§>pi(hi)  (hi))  )  M  X  \.  ByLemmaB.l.  
Sat (a,pi(hi))  Suppose t  1=^. t\ gi=?>hi  Sat (<Ti,gi) Mt  (2) t ^ Sar^t (cr, pi (gi)) (3)  t -<Sat (cr,pi(hi))  (4)  t •< SatM {o~,hi)  Lemma B. 1 Sat {cri,gi).  M  Mi  By hypothesis 2.  • B.1.2 Composition of Circuit Models For circuit models, there are several natural definitions of composition that have useful properties. There are, of course, other ways of composing circuits, but the one discussed here is simple and useful. Let Mi = ((C \ n),A ,Yi) m  mi  and M  = ((C \ C ), A \ Y ) be two models. For m  2  m  2  circuit models, the next state function Y : C —> C is represented as a vector of next state n  n  function ( Y [ l ] , . . . ,Y[n]) where each Y[j] : C -> C and Y(s) = (Y[l](s),... ,Y[n](s)). n  To compose two circuit models, we identify r pairs of nodes (each pair comprising one node of both circuits) and 'join' the pairs (i.e., informally, think of these pairs as being soldered together, or physically identical). The state space of the composed circuit consists of mi + m — r 2  components. The first mi — r components are the components of Mi that are not shared with  Appendix B. Detail of testing machines  M. 2  227  The next m — r components are the components of M 2  2  that are not shared with Mi. The  final r components are the r components shared by both Mi and M . 2  The formal definition of composition is a little intricate since it identifies state components by indices. The difficult part of the definition is identifying for each state component in the composed circuit the component or components in Mi and M  that make it up. The idea is 1  2  simple — the book-keeping is unfortunately off-putting. Let  Ii — ( a i , . . .  ,  a ), and Ji = (a[,... , a' _ ) be lists of state components of Mi. If r  r  s £ Si, then the s[aj] are the components of the state space that are shared with M and the s [a'j] 2  axe the components of the state space that are not. We place the natural restriction that Ii and J  x  are disjoint, and that their elements are arranged in strictly ascending order. I  2  and  J = (b[,... , b' _ ) are the corresponding of lists for M . Each 2  m2  r  2  (a,-,  = (61,...  ,b ) r  bj) pair is a pair of  state components that must be 'joined'. Let convi(j) be the component of M to which the j-th component of Mi contributes. Formally, define  ( when 3a' € J i 9 a' = j  k  k  mi + m — r + k  when Ba^ € h 9 a-k — j  mi - r + k  when 3b' £ J 9 b' = j  mi + m — r + k  when 3bk £ h 3 bk = j  2  2  Since the  k  k  2  k  Ii and J , are distinct, we can define an inverse to convi. Define indexi(j) = k where  convi(k) — j. Note that indexi is not defined on all of {1,... , mi + m — r], but that where 2  it is defined,  indexi(j) is the component of the state space of Mi which contributes to the j-th  component of M. With this technical framework, composition can be defined easily. If si £ In practice, composition is a lot easier. Nodes are labelled by names drawn from a global space. We use the convention that if the same name appears in both circuits, then the nodes they label are actually the same physical node. Thus, the pairs that must be connected are implicit and do not have to be given. 1  228  Appendix B. Detail of testing machines  C  mi  and s e C™ , define Q(S S ) by 2  2  U  2  Q{SI,S ) = (si[indexi(l)],... , si[indexi{mi — r)], 2  + 1)],... , s [index (mi + m  s [index {mi — r 2  2  2  2  2  —  2r)],  si[mflfexi(mi + m — 2r + 1)] U s [index (mi + m — 2r + 2  2  2  2  1)],... 
,  .si[/ndfexi(r)] U s [ina'ex (m + m2 — r)] ) 2  2  1  Part of defining the composition is to define the mapping from simple predicates in M i and M to M. For T L , this is easy since it is only for predicates of the form [j] that a non-trivial 2  n  mapping has to be defined.  Define convi(j)]  when g = [j] for some j when g a constant predicates.  Then define  where  Y[j](s)  Yi[wwfej:i(j)]((s[co/iVi(l)],... ,s[coravi(rai)])),  j < mi — r  Y [i/Mte*2(j)]((s[c0/iv (l)],. • • ,s[c0/iv (m )])),  mi — r < j < mi +  2  2  2  2  Yi[/nflfexi( ;)]((5[convi(l)],... , s[convi(mi)])) U i  \Y {index {j)){(s[conv {\)],... , s[corav (m )])), 2  2  2  2  Then X is a ^-composition of A ^ i and M , denoted 2  2  ?m\ + m  g{M\,M ). 2  2  —  2r < j  m  2  —  2r  229  Appendix B. Detail of testing machines  Lemma B.3. Q meets all the criteria given in Definition B. 1. Proof. (1) Q is monotonic. Q is defined component-wise. Each component is constructed from the identity and join functions. Since both of these are monotonic, monotonicity follows.  (2)  £(U ,U ) = u TOl  m2  mi+m2  ~  r  and£(Z ,Z m i  T O 2  ) =  Z  -  mi+m2  r  Follows straight from the definition of g.  (3)  q = 51(61) implies thatq ^ /0i(0i)(y>i(si, U™ )) 2  a.  Suppose q = # i ( s i ) , let s = Q(SI, U ) .  b.  si[j] = s[convi(j)]  c.  Let ^ G C i .  m2  From definition of g(si, U ™ ). 7  Suppose g — [j] for some j. x  d.  P i ([j]) = [convi(j)]. Definition of 0 i .  ef.  =9i(si) gi( )(s) = q gi  (b)and(d). By assumption and (e).  Otherwise gx must be one of the constant predicates {_L, f , t, T } . g-  Qi{9i)=9i  Definition of g.  h-  q = Qi{gi)(s)  Q\{g\) is constant.  (4) Similarly q = g {s ) => q r< 02(02X0(1^, s )). 2  2  2  (5) ^ ( Y i ( s i ) , Y ( 3 ) ) r< Y ( ^ ( s i , s ) ) . Proved by showing for all j , 2  2  2  2  230  Appendix B. Detail of testing machines  Q{Y (s ),Y (s ))\j]^Y\jMs s )). 1  2  1  2  u  2  Suppose j < mi — r. £(Yi( i),Y (s ))[j] 5  2  2  = Yi(si)[i/ufe*i(j)]  = Y [indexi(j)](s ) 1  1  = Yi[indexi(j)] = ((s[convi(l)],... ,s[co/iVi(mi)]))  = Y[j](.) Similarly if mi — r < j < mi + m — 2r, 2  ^ ( Y i ( i ) , Y ( ))[j] = Y [/mfcc (j)](s ) = 5  2  52  2  2  2  Y[j](s)  Suppose mi + m — 2r < j 2  £(Yi(6i),Y (. ))[j] 2  2  = Yi(si)[tmfej:i(j)] U Y (s )[m^x (j)] 2  2  2  = Yi[mJexi(j)](si) U Y [iWex (j)](s ) 2  2  2  = Y[j](,)  •  Lemma B.3 is important because it means that Lemma B . l can be used. Furthermore, where composition of machines is done is such a way that the 'outputs' of one machine are connected to the 'inputs' of the other (so there is no 'feedback' — signals go from one machine to the other, but not vice  versa.), Lemma B.2 applies too (to the circuit that provides the outputs).  This definition is dependent on 7 I , J\ and J ; for convenience, the following short-hand 1?  is used:  Q(A,  B)(ri,...  , r*)  2  2  refers to the composition of A and B where the r . . . , 1 ?  ponents of A are shared with the first k components of B; formally Ii =  (r  l 5  ...  com-  ,r ), J = k  x  (1,... , r - 1, r i + 1,... , r - 1, r + 1,... , k), I = (1,... , k) and J = (k + 1,... , k). x  fc  fc  2  2  Appendix B. Detail of testing machines  B.2  231  Mathematical Preliminaries for Testing Machines  This section assumes the state space of the system is C for some n. There is nothing inherent n  in the method which limits the state space to this. However, from a notational point of view it is easier to explain the method with this simple case; furthermore, this is the important, practical case. 
The method generalises easily to an arbitrary complete lattice. Suppose that M i and M have both been derived from a common machine, M , using a 2  sequence of compositions. (Assume that M = (C ,Y*), M i = ( C " n  (C  + m i  , Y i ) and M  2  =  , Y ).) By the definition of composition the two next state functions Y and Y re-  n+m2  2  x  2  stricted to C are identical and the same as Y * . n  M{ (i = 1,2) consists of M and a tester T . The relative composition of M i and M with 2  2  respect to M is the composition of M , 7\ and T . All of this could be described by composition, 2  but it is convenient to define a specific notation. Formally, reLcomp (M , M ) = (C \ Y ) , n  M  1  2  where n' — n + mi + m and if the current state is s, (ti,... , t >) = Y(s) is defined by: 2  n  when 1 < j < n + m\ ^Y ((si,... , s , s 2  B.3  n  n + m i +  i , . . . ,s >))\j] when n + m i < j < n' n  Building Blocks  Basic Block BB  A  Some predicates may depend (in some way) on the value of node 1 at a time t\ and node 2 at time t . (Formally, a 'node' is a component of the state space; informally since we are reasoning 2  about physical circuits nodes are wires in the circuit.) The purpose of BB A is to provide delay slots so that both values of interest are available at the same 'time'. BBA takes two parameters: a function g : C —> C which is used to combine the values, and n which indicates how many delay slots need to be constructed. Figure B . l depicts BBA(g,  4).  Appendix B. Detail of testing machines  232  a  Figure B . l : BB (g,3): a Three Delay-Slot Combiner A  BB (g, n) consists of two nodes which act as input nodes, n delay slots, and one node which A  is the output value of the machine. The two inputs nodes are typically part of the original circuit, which is why BB s A  next state function does not affect the first two components. Formally, JL,  BB (g,n) = (C ,Y)  when j = 1,2;  where if t = Y(s),then^ = <  n+3  A  when j = 3 , . . . n + 2;  whenj = n + 3. The comp Jest operator adds the BB circuit to an existing circuit. Given a machine M 5r(si,5 _i), n  A  and a predicate g which depends on the value of i at time 11 and i at time t , the composite x  2  2  machine ( M plus the testing circuit) is defined by: compJest(M,g,  (W2))  QiMiBBAfah  -t ))(H,i )  ifh>t  g(M,BB (g,t  -ti))(i ,ii)  otherwise.  A  2  2  2  2  2  (B.l)  The problem with the BB testing machine is that if the two defining times are far apart, A  the testing circuit could be large due to the need to retain and propagate values, which has both space and computation costs associated. There is an alternative approach - build a memory into the circuit which keeps the needed information. Define BB (g) A  = (C , 5  Y ) . If the state of the machine is ( s i , . . .  , s ), 5  think of s  x  and s as the inputs, and s as the output. s is used as the memory, and s indicates whether 2  5  4  3  233  Appendix B. Detail of testing machines  the memory's value should be reset or maintained. Formally, when j  1,2,3  when j  4  and s = 0  whenj  4  and s = 1  when j  5  3  (B.2)  < 3  Define g{M,BB {g))(i ,i ) 1  A  compJest(M,g,  if * i >  2  t  2  (B.3)  (ti, «i) ( < 2 , i ) ) 2  p ( M , BBA(g)){i2,  i\)  otherwise.  Although in general the definition of equation B.3 will be more efficient, it cannot always be used. To see why consider this example. Let g and h be two predicates containing no temporal operators, where the result of g can be found at node i and the value of h at node i . 
Suppose x  2  we want to evaluate the predicate g A Next /*. Implicit in this is that we are interested in i at 3  x  time 0 and i at time 3. For this we could use the second implementation of comp 2  and get  Jest  the new machine compJest(M,  (Xx, y.x A y), (0, i i ) , (3,  i )). 2  Now suppose that we are interested in the predicate  Exists  [(0,10)] (g A Next /i). 3  This  asks whether there is a time t between 0 and 10 such that g holds at time t and h holds at time t + 3. For this predicate, the second implementation will not work since it only remembers the value of g at one particular time, and we need to have the value of g at a sequence of times. The general rule in choosing between the implementations is that if the predicate for which the tester being constructed is within the scope of temporal operators such as E x i s t s and G l o b a l , then thefirstimplementation must be used; otherwise the second, more efficient implementation can be used.  Appendix B. Detail of testing machines  234  Basic Block BB B BBB is used when we need to combine the value of a predicate at a number of different times. For example, Globalg and E x i s t s g depend on the value of g at a sequence of times. Define BB (g,k) B  = ( C * , Y ) where if t = Y(s) then: + 2  JL  when j = 1  t = {S!  when j = 2  3  f(sj,Sj-i)  otherwise.  Figure B.2 depicts this graphically.  Figure B.2:  Basic Block BB  BB (g,4) B  c  This is just a simple latch with a comparator. 5 5 c = (C , Y ) where Y ( ( s i , s , s )) = (-L 3  2  ,5 ,5l = 2  S2).  Inverter Define / = ( C , Y ) where Y ( ( s i , s » = ( J - , - - * ! ) . 2  2  B.4  Model Checking  This section shows how to accomplish the following:  3  235  Appendix B. Detail of testing machines  Given a machine M and an assertion of the form |= t\ A=Og  construct a ma-  chine M' and trajectory formulae A', C such that |= \ A=f>g }  -£=^>  |=M»  Every temporal formula, g, has an associated tuple (i,t, A, M')\ i indicates that the formula can be evaluated by examining the i-th component of the state space of the new machine; t indicates the time at which the component should be examined; A' gives a set of trajectory formulas which are used as auxiliary antecedents for the new machine; and M' is the new machine. The tuple is defined recursively on the structure of the temporal formula.  2  1. g is ([i] — v). The tester which checks this compares the value of node i to v. The associated tuple is (n + 2,1, A', M'), where • A' = ([n + • M' =  2]=v)  g(M,BB ){i). c  2. g is Next g'. This does not require any extra circuitry — the tester that tests g' is already j  built in, and the only difference is that the result is checked at a different time. If the tuple associated with g is (i,t, A, M'), the tuple associated with Next jg is (i, t + j, A, M'). 3. gis~>g'. If the tester for g' is already built, an inverter will compute the answer for g. So, if the tuple associated with g is (i, t, A, M'), the tuple associated with -•/ is ( | M " | , t + 1, A, M"), where M" = g(M, 4. g is g'(gi,g2). Typically g' would be conjunction or disjunction. The tester takes as its input the results of g and g and applies g' to them. Let the tuple associated with g be x  2  x  (ii, t , A i , M i ) and the tuple associated with g be (i ,t , A , M ). Assume that \M\ \ — x  2  2  2  2  n + mi and \M \ = n + m . 2  2  2  Note that in this discussion M refers to the original machine, and n =  \M\.  2  236  Appendix B. 
Detail of testing machines  The tuple associated with g(gi,g ) is (|M'|, max(£i,tf )+ 1, M A A , M') where 2  2  M ' = compJest(M",g,(t ,n+ml),(t n+ml+m2)) 1  5.fi-is Global  [(i, j)]  2l  2  andM" = rel.comp (Mi, M ) . M  This can be computed as Next '(NextV A . . . 8  2  A^ext^-'V))-  Evaluating this directly is too inefficient (since lots of redundant work will be done). The following approach computes g' exactly once and then provides appropriate circuitry to combine this value produced at various times. If the tuple associated with g' is (i t, A 1?  1 ;  M ), where \M | = mi then the new tuple x  X  associated with g is (|M'|, t + 1, A ' , M') where: . M' = g(M, BB ((Xx, y.x A y), (j - i)))(n) B  A smaller, more efficient testing machine can be built provided that Global operator is not nested within another temporal operator. In this case, suppose the tuple associated with g is (i t , A , M ), where | M i | = mi. The new tuple is (|M'|,*i + 1,A',M') u  x  x  x  where: • M' — g(M,M")(ii) • M" = (C ,Y) 2  • ift = Y(s) then: when j = 1. |^si A s  2  when j = 2.  • A'= Ai A(Next* [mi + 1] = 1). 1  6. g = Exists [(i,j)] g'. This is analogous to the Global case, and can be computed as Next'(Next V V . . . V (Next^' '^')). All the remarks pertaining to the Global oper-4  ator apply to the Exists operator - the difference is that instead of conjunction being 3  Recall that M is the underlying machine.  3  Appendix B. Detail of testing machines  237  applied, disjunction is applied. This shows that De Morgan's laws have a direct correspondence in the testing machines. 7. The bounded strong until, weak until, and periodic operators are all derived operators (see Definition 3.8). A straight-forward approach to model-checking these operators is modelcheck their more primitive definitions. For all three, smaller and more efficient machines are possible too. There are competing threads here—the more operators the easier it is for a verifier to express properties, but with the wider choice comes the cost of greater complexity for the verifier. Constructing testing machines for the derived operators using the primitive definitions is not the most efficient approach: if the operators are going to be used, optimised testing machines should be constructed; but, if they are not going to be used the verification system will be more complicated that it needs to be. There are two further types of optimisations which could be done. Testing machines are not canonical—there are different ways with different complexities of evaluation. The rewriting of formulas could yield improvement. The other issue has been discussed already: in some cases the testing machine needed for a formula depends on whether that formula is embedded within other temporal operators. If a formula stands by itself, then its satisfaction can checked by examining one component of the testing circuit at one instant in time. However, if the formula is embedded within some of the temporal operators, then we need to know the satisfaction of the formula at a number of instants in time.  
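As a summary of the construction just described, the following sketch (illustrative only: it is not the thesis implementation, the machine bookkeeping is abstracted to a component count plus a list of glued-on testing blocks, and all names are placeholders) traces the recursion for the node-equality, Next and negation cases.

def build_tester(size, blocks, formula):
    # Returns (check_node, check_time, aux_antecedents, new_size, new_blocks),
    # mirroring the (i, t, A', M') tuples described above.
    kind = formula[0]
    if kind == "eq":                     # item 1: ("eq", i, v) compares node i with v
        _, i, v = formula
        out = size + 2                   # gluing BB_C onto node i adds two components
        aux = [(out, v)]                 # the auxiliary antecedent [out] = v
        return out, 1, aux, size + 2, blocks + [("BB_C", i)]
    if kind == "next":                   # item 2: ("next", j, g) reuses g's tester
        _, j, g = formula
        node, t, aux, size2, blocks2 = build_tester(size, blocks, g)
        return node, t + j, aux, size2, blocks2
    if kind == "not":                    # item 3: ("not", g) inverts g's tester output
        _, g = formula
        node, t, aux, size2, blocks2 = build_tester(size, blocks, g)
        out = size2 + 1                  # the inverter shares its input with g's check node
        return out, t + 1, aux, size2 + 1, blocks2 + [("INV", node)]
    raise NotImplementedError(kind)

The remaining cases follow the same pattern: build the testers for the subformulas, compose the resulting machines, glue on the combining block (conjunction or disjunction, Global, Exists), and adjust the check node and check time as in items 4-6.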
Appendix C  Program listing  Cl  F L Code for Simple Example 1  l e t c_size = bit_width; l e t bwidth = ' c_size; l e t i = IVar " i " ; l e t j = IVar l e t 1 = IVar "1"; l e t GND = "GND"; let  "k";  a = Var "a";  l e t A = "a"; l e t D = "d"; let let let let  " j " ; l e t k = IVar  l e t B = "b"; l e t E = "e";  l e t C = "c"; l e t FNode = " f " ;  Globallnput = ((A ISINT i)_&_(B ISINT j)_&_(C ISINT k)) FROM 0 TO 100; listl = [ ( " i " , 1 upto 8), ("j", 9 upto 16), ("k", 17 upto 24)]; varmapl = BVARS l i s t l ; varmap2 = BVARS (("a", [ 2 5 ] ) : l i s t l ) ;  l e t A l = Globallnput; l e t C l = D ISBOOL ( i > j) FROM 10 TO 100; l e t TI = VOSS varmapl (Al = = » Cl) ; l e t A2 = Globallnput _&_ ((D ISBOOL a) FROM 10 TO 100); l e t C2 = Globallnput _&_ ((E ISINT i WHEN a) _&_ (E ISINT j WHEN (Not a)) FROM 20 TO 100); l e t T2 = VOSS varmap2 (A2 = = » C2); l e t A3 = E ISINT 1 C ISINT k FROM 20 TO 100; l e t C3 = FNode ISINT (1 '+ k) FROM 50 TO 100; l e t T3 = VOSS varmapl (A3 = = » C3) ; l e t proof = l e t GI = SPTRANS [] TI T2 i n l e t G2 = SPTRANS [] GI T3 i n G2 ;  238  Appendix C. Program listing  C.2 let let let let let let  F L Code for Hidden Weighted Bit N = bit_width; x = IVar "x"; InputNode = " InputNode"; BufferNodel= " b u f f e r l " ; Chooser = "chooser"; "error"; Error  l e t varmapl = BVARS([("x",1 let  239  let let let let  j = IVar " j " ; CountNode = "CountNode BufferNode2= "buffer2"; Result = "result";  upto N)]);  BufferTheorem = VOSS varmapl ((InputNode ISINT x FROM 0 TO 1000) = = » ( (BufferNodel IS INT x) _&_ (BufferNode2 ISINT x) FROM 5 TO 1000));  letrec add_bits x num = x = N => BIT2 ('N) num | (BIT2 C x ) num) '+ (add_bits (x+1) num); letrec count_of num = add_bits 1 num; l e t CounterGoal = (BufferNodel ISINT x FROM 0 TO 990) ==» (CountNode ISINT (count_of x) FROM 400 TO 990); l e t CounterTheorem = VOSS varmapl CounterGoal; l e t stagel  = CONJUNCT BufferTheorem (AUTOTIME [] BufferTheorem CounterTheorem);  l e t seg x = BWID ('Nbit) x; l e t kthBit k var = (BIT2 ('k) var) '= (BIT2 ( ' 1) C D ) ; letrec case_analysis var j = l e t r e c case k = " k=l => Result ISBOOL (kthBit k var) WHEN (j '= (seg C k ) ) ) | (Result ISBOOL (kthBit k var) WHEN (j '= (seg ('k)))) _& (case (k-1) ) i n case N; i n f i x 3 ISBOOL_VEC; l e t r e c ISBOOL_VEC [x] [y] = x ISBOOL y A ISBOOL_VEC (x:rx) (y:ry) = (x ISBOOL y) _&_ (rx ISBOOL_VEC r y ) ; let  ChooserGoal= ((CountNode ISINT j FROM 0 TO 400) _&_ (BufferNode2 ISINT x FROM 0 TO 400)) ==>>  ( ( (case_analysis x j ) _&.  Appendix C. Program listing  ( E r r o r ISBOOL  240  (seg('O) '= j ) ) ) FROM 300 TO 400);  l e t ChooserTheorem = l e t Proof = ALIGNSUB  C.3  VOSS varmap2 ChooserGoal; [] s t a g e l  ChooserTheorem;  F L Code for Carry-Save Adder  l e t A = Nnode "A"; l e t D = Nnode "D";  l e t B = Nnode "B"; l e t E = Nnode "E";  l e t a = Nvar "a";  l e t b = Nvar "b";  l e t C = Nnode "C";  l e t c = Nvar " c " ;  l e t bdd_order = o r d e r _ i n t _ l [b, c, a ] ; let  range  (bit_width-l)--0;  let let  sum_lhs sum_rhs  (D+E) « r a n g e » ; (a+b+c) « r a n g e > > ;  let let  Antl Conl  ((A == a)??) and ((B == b) ??) and ((C NextG 3 ( (sum_lhs == sum_rhs) ? ? ) ;  l e t T l = prove_voss bdd_order adder A n t l C o n l ;  C.4  F L Code for Multiplier  // let let let  miscellaneous h i g h _ b i t = e n t r y _ w i d t h - 1; // 0 . . 
e n t r y _ w i d t h - l max_time = 800; out_time = 3;  II-  Node, v a r i a b l e  declarations  let let let let let  A = Nnode AINP; B = Nnode BINP; RS i = Nnode (R_S i ) ; RC i = Nnode (R_C i ) « (high_bit-l)--0» ; T o p B i t i = Nnode (R_C i ) < < h i g h _ b i t » ;  let let let let  a b c d  = = = =  (Nvar "a") « (entry_width-l)--0»; (Nvar " b " ) « (entry_width-l)--0»; Nvar " c " ; (Nvar " d " ) « (high_bit-l)--0»;  c)??) ;  Appendix C. Program listing  let // let  partial BDD v a r i a b l e  {n  ::  241  int}  = c  ordering for  <<(n+high_bit)--0>>;  each  stage of  multiplier  m_bdd_order { n : : i n t } = n = 0 => o r d e r _ i n t _ l [b, a] | n=entry_width => o r d e r _ i n t _ l [ p a r t i a l n , | o r d e r _ i n t _ l [b<<n>>,  let  zero_cond  let  interval n = n <= e n t r y _ w i d t h => [ ( ' ( n * o u t _ t i m e ) , 'max_time)] | [('(n*out_time+2*entry_width),  let let  InputAnts  i  =  ((TopBit  d] a,  = Always  p a r t i a l n,  d];  i)==('0))??;  'max_time)];  ( i n t e r v a l 0) (( (A == a) ??  ) and ( (B == b) ??)); OutputCons = l e t l h s = RS e n t r y _ w i d t h i n l e t r h s = (a * b) « ( 2 * e n t r y _ w i d t h - l ) - - 0 » in Always ( i n t e r v a l (entry_width+l)) ((lhs==rhs)??);  / / A n t e c e d e n t f o r row n o f t h e m u l t i p l i e r let MAnt { n : : i n t } n = 0 => A l w a y s ( i n t e r v a l 0) ( ( (A == a ) ? ? ) a n d ( ( B « n » == b « n » ) ? ?  )  ) |  // let  Consequent  of  Always  row n o f  res_of_row n = l e t p o w e r n = Npow l e t l h s = (RS n) +  the  ( i n t e r v a l n) (( (A == a ) ? ? ) a n d ( ( B « n » == b « n » ) ? ? ) and ( (RS (n-1) == ( p a r t i a l (n-1)))??) a n d ( (RC (n-1) == d ) ? ? ) and ( z e r o _ c o n d (n-1)) ); multiplier  ('2) ('n) in (power ( n + l ) ) * ( R C  let rhs = n=0 => a * b < < 0 » | ( ( p a r t i a l (n-1)) +(power n) ( ( l h s == r h s ) ? ? ) ;  * d)  n)  +  in  (power  n) * a  * (b « n » )  in  Appendix C. Program listing  Con_of_stage  n n  242  =  let  power  let  l h s =  (RS n )  let  rhs =  a * b«n--0» i n (interval (n+1))  Always  = Npow  ((lhs  MCon  {n::int}  +  ('2)  ('n)  (power  ==  rhs)??  =  Always  i n  (n+l))*(RC  and  (zero_cond n));  n  bdd_order  =  let  ant  = MAnt  let  con  preamble_thm start  (m_bdd_order  n  m  start; previous_step  = Mthm  let  curr'  = GenTransThm  let  current =  =  =  n i n previous_step curr i n  Postcondition (Con_of_stage  n)  curr'  i n  m => |  adder_proof  ant con;  0 i n  InputAnts  curr  main_stage  n) i n  multiplier  let  n  current do_proof_main_stage  = do_proof_main_stage  (n+1)  m  current;  1 high_bit  preamble_thm;  =  post_ant_cond  =  ((  (RS h i g h _ b i t )  ==  ((  (RC h i g h _ b i t )  ==  d)??)  (TopBit high_bit)  ==  ('0))??)  ((  n));  =  do_proof_main_stage  let  (zero_cond  n i n  bdd_order  = Mthm  Precondition  let  ) and  = MCon n i n  prove_voss  let  n  =  let  let  (n+1))  (interval  ((res_of_row Mthm  n) i n  (partial  high_bit))??)  and  and  in let  post_ant  = Always  (interval  let  power  = Npow  let  rhs  =  let  post_con_cond  =  let  post_con  = Always  entry_width)  post_ant_cond  in  post_con_cond  proof  ('entry_width) high_bit)  i n  + power  ((RS entry_width) (interval  -=  * d)<<(bit_width-l)--0»  i n  rhs)?? i n  (entry_width+l))  i n  prove_voss let  ('2)  ((partial  (m_bdd_order  = GenTransThm  entry_width)  main_stage  multiplier  adder_proof;  post_ant  post_con;  Appendix C. 
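Reading off the C.4 listing above: the per-row consequent Con_of_stage states the row invariant that the proof script checks. Writing RS_n and RC_n for the sum and carry vectors after row n (with the top carry bit forced to zero by zero_cond), it asserts, in LaTeX notation for clarity,

  \mathrm{RS}_n + 2^{n+1} \cdot \mathrm{RC}_n = a \times b\langle n..0\rangle

and res_of_row records the corresponding step: assuming the previous row's result (RS_{n-1} = partial(n-1) and RC_{n-1} = d, as in MAnt), row n adds 2^n * d and 2^n * a * b<<n>> to the running total.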
Program listing  C.5  243  F L Code for Matrix Multiplier Proof  // miscellaneous let let let let // let let // // let  high_bit = max_time = clock_time out_time =  entry_width - 1 ; // 0..entry_width-l entry_width < 10 => 100 | 10*entry_width; = max_time; // half a clock cycle 3;  prove_result = prove_voss_fsm; prove_result_static = prove_voss_static; Node, variable declarations global Clock = Bnode CLK;  // let let  individual c e l l s A u v = Nnode (AINP u v); l e t B u v = Nnode (BINP u v ) ; IN_C u v = Nnode (C_Inp u v); l e t OUT_C u v = Nnode (C_Out u v ) ;  let let let let  M = make_fsm sys_array; RS u v i = Nnode (R_S u v i ) ; RC u v i = Nnode (R_C u v i)<<(high_bit-l)--0>>; TopBit u v i = Nnode (R_C u v i) «high_bit>>;  let let let  a b d  let  p a r t i a l {n :: int} = e  = (Nvar "a") « (entry_width-l)—0»; = (Nvar "b" ) « (entry_width-l)—0»; let = (Nvar "d")«(high_bit-l)—0»; let e  c = Nvar "c"; = Nvar "e";  «(n+high_bit)--0>>;  // BDD variable ordering f o r each stage of m u l t i p l i e r let m_bdd_order {n::int} = n =0 => o r d e r _ i n t _ l [b, a] | n=entry_width => o r d e r _ i n t _ l [ p a r t i a l n, d] | o r d e r _ i n t _ l [b<<n>>, a, p a r t i a l n, d]; // timings let Duringlnterval n f = During (n*out_time, max_time) f; l e t r e c ClockAnt n = l e t range = 0 upto (n-1) i n l e t false_range = map (\x.(('(2*x*clock_time), '(2*x*clock_time+clock_time-l)))) range i n l e t true_range = map (\x.('(2*x*clock_time+clock_time),  244  Appendix C. Program listing  '(2*(x+1)*clock_time-l))) (butlast range) i n (Always false_range ((Clock == Bfalse)??)) and (Always true_range ((Clock == Btrue )??)); let let  InputAnts u v = Duringlnterval 0 ((A u v '= a) and zero_cond u v i = TopBit u v i  (B u v '= b));  '= ('0);  // Antecedent for row n of the m u l t i p l i e r let MAnt u v {n::int} = n =0 => Duringlnterval 0 ( ( A u v '= a ) and ( (B u v)<<n>> '= b<<n>>)) | Duringlnterval n (( A u v '= a) and ( (B u v)<<n>> '= b<<n>>) and ( RS u v (n-1) '= ( p a r t i a l (n-1))) and ( RC u v (n-1) '= d) and ( zero_cond u v (n-1)); // Consequent of row n of the m u l t i p l i e r let res_of_row u v n = l e t power n = Npow ('2) ('n) i n l e t lhs = (RS u v n) + (power (n+l))*(RC u v n) i n l e t rhs = n=0 => a * b <<0>> | ( ( p a r t i a l (n-1)) +(power n) * d) + (power n) *a* (b « n » ) i n lhs '= rhs; let  Con_of_stage u v n = l e t power n = Npow ('2) ('n) i n l e t lhs = (RS u v n) + (power (n+l))*(RC u v n) i n l e t rhs = a * b«n--0>> i n Duringlnterval (n+1) ((lhs '= rhs) and (zero_cond u v n));  l e t MCon u v {n::int}  =  Duringlnterval (n+1) ((res_of_row u v n ) and (zero_cond u v n));  let  Mthm u v n = l e t bdd_order = (m_bdd_order n) i n l e t ant = MAnt u v n i n l e t con = MCon u v n i n prove_result bdd_order M ant con;  let  preamble_thm u v = p r i n t (nl^"Doing preamble"~nl) seq  Appendix C. 
Program listing  let  start  (start  = Mthm  catch  u v  start)  Precondition letrec  245  (InputAnts  do_proof_main_stage let  0 i n  seq  u  v  = Mthm  let  curr'  = GenTransThm  let  current  (print  v)  n m  curr  =  u  u  v  previous_step  =  n i n previous_step curr i n  Postcondition (Con_of_stage  (nl""  Doing  M[""(int2str  "]("~(int2str (current  start;  u)"",  n)~")"~nl~nl)  n  u  v  n)  curr'  i n  "(int2str v ) "  seq  catch current))  seq ( n  =  m  =>  current  | let  main_stage  do_proof_main_stage  u v  adders_proof let  v  (n+1) m  u v  u  v  1 high_bit  (preamble_thm  u v ) ;  =  post_ant_cond  =  (  (RS u  v  high_bit)  '=  (  (RC u  v  high_bit)  '= d ) a n d  (  current);  =  do_proof_main_stage let  u  (TopBit u v  high_bit)  (partial  '=  high_bit))  and  CO))  in let  post_ant  let  power  = Npow  = Duringlnterval  let  rhs  =  let  post_con_cond  let  post_con  ('2)  ((partial =  entry_width  ('entry_width)  high_bit)+ power*d)«(bit_width-l)--0>>  (RS u  v  entry_width)  bdd_order "Doing  = m_bdd_order adder"  prove_result let  cell_out_time  let  register_proof let  c_ant  bdd_order =  c_ant'  M  entry_width i n post_ant  [('(2*clock_time), u v =  clock_time)  i n  seq (post_con  catch  post_con))  seq  post_con; '(3*clock_time))];  =  (((RS and  let  i n  '= r h s i n  (entry_width*(out_time+2), post_con_cond  (print  i n  =  During let  post_ant_cond  i n  u  v  entry_width)  ((IN_C  u  v)  '=  (partial  entry_width))  '= c ) ) i n  =  (ClockAnt (During  2) a n d  (entry_width*(out_time+2), c_ant)  let  c_rhs  =  (partial  entry_width)  let  c_con  =  (OUT_C  v)  let  c_reg  =  prove_result  u  + c i n  '= c _ r h s i n  (order_int_l M  clock_time)  i n  [c, p a r t i a l  entry_width])  Appendix C. Program listing  246  c_ant' (Always  cell_out_time  c_con)  in ((print  "Doing  register")  seq c_con  catch  c_con)  seq c_reg;  //  one_proof  //  u  v:  proves  that  the  (u,v)-th  cell  works  correctly  let  one_proof //  u  Prove  let  m_stage  (m_stage //  v  take  let  =  that  multiplier  parts  = main_stage  catch into  m_stage)  account  new_ants=  u  v  work  seq  clocking  InputAnts  u  v  (ClockAnt  2)  and  //  new_thm show  let  Add  let //  =  catch  comp_proof  let  that  v  u  v  '=  m_stage  c)) i n  i n  works  i n  proof new_thm  a_proof  i n  work  register_proof u  catch  input  seq  GenTransThm  =  proof  u  to the adder =  (IN_C  the ceol  the registers  r_proof  ((r  of  a_proof)  0  new_ants  adders_proof  clocking  Show  part  sum  and  Precondition  the adder  a_proof  (a_proof //  =  the p a r t i a l  and  (Duringlnterval let  (unclocked)  i n  v  i n  r_proof)  seq // let  stick  result  =  them  a l l  together  (normaliseCon  (GenTransThm  comp_proof  r_proof))  i n  result);  letrec  make_cell_row_list  p_proc  u  v  =  v=array_depth =>  []  |  l e t res = p_proc print  (snd  (res letrec u  =  make_proof_list  v  i n  res)) seq  (res:(make_cell_row_list p_proc  p_proc  u  =  array_width => |  [] (make_cell_row_list  (make_proof_list let //  seq  u  (time  p_proc  cell_proof_list Show  that  the c e l l s  p_proc  u  0):  (u+1)); = make_proof_list also  progate  their  one_proof A  and  B  0; inputs  u  (v+1))));  Appendix C. 
Program listing  let  247  one_proof_propagateA let  ants  =  let  ab_con  = A  let  ab_reg  =  u  v  =  (Duringlnterval u  (v+1)  prove_result  '=  0  (A u  v  '=  a)) and  M  ants  (ClockAnt  2)  i n  a i n  (m_bdd_order  0)  (Always  cell_out_time  ab_con)  i n  ab_reg; let  one_proof_propagateB let  ants  =  let  ab_con  =  let  ab_reg  =  u  v  =  (Duringlnterval (B  (u+1)  prove_result  v)  0  ((B u v)  '= b ) ) a n d  (ClockAnt  2) i n  '= b i n  (m_bdd_order  0)  M  ants  (Always  cell_out_time  ab_con)  i n  ab_reg; let  Apropagate_proof_list  = make_proof_list one_proof_propagateA  0;  let  Bpropagate_proof_list  = make_proof_list one_proof_propagateB  0;  let  cell_proof  let  Apropagate_proof  u  v  = e l (v+1)  ( e l (u+1)  Apropagate_proof_list);  let  Bpropagate_proof  u  v  = e l (v+1)  ( e l (u+1)  Bpropagate_proof_list);  let  em_thm  =  u v  = e l (v+1)  ( e l (u+1)  cell_proof_list);  ([],[],[]);  // //  The  //  components  *_proof_list  //  proof  //  the right  letrec  contains a l l the proofs  o f the hardware  shows  that  when  matrix  work  connected  together  multiplication  InsertActiveTheorem  that  correctly.  the  they  result  addfn [(u,  InsertActiveTheorem addfn  = A  PutActiveTheoremln  [(v,addfn  new_thm  PutActiveTheoremln v => |  [] =  [(v,addfn  new_thm  (u,v,new_thm)  ((au, letrec  of the  produce  ({u::int},{v::int},{new_thm::theorem}) A  individual  The r e s t  alist):brest)  ({v::int},  =  {new_thm::theorem})  em_thm)]  ( v , new_thm)  ((av,a v l i s t ) r v r e s t )  =  = av  (av, addfn  new_thm  avlist):vrest  (av, a v l i s t ) :  (PutActiveTheoremln in => |  u  =  ( v , new_thm)  vrest)  au  (au, PutActiveTheoremln  ( v , new_thm)  alist):brest  (au, a l i s t ) :  (InsertActiveTheorem  addfn(u,v,new_thm)  brest);  []  em_thm)])]  Appendix C. Program listing  letrec  RetrieveTheorem  /\  {u::int}  RetrieveTheorem letrec A  248  u v  {v::int} v  []  GetActiveTheorem  v  ((av,a v l i s t ) : v r e s t ) =  |  //  VERIFICATION  v  RetrieveTheorem  u v  add_fn  thm_list  InpNode let  (InpNode in  (u, v.  
vrest  i n  current  alist brest;  =  add_fn  x  y)  thm_list  current;  CONDITION Input  setlnput  v  GetActiveTheorem  (\xAy.InsertActiveTheorem  /// let  GetActiveTheorem  = au =>  InsertActiveList  ([],[],[])  avlist  | u  =  = av =>  i t l i s t  ([],[],[]) =  GetActiveTheorem v  let  •[] =  ((au,a l i s t ) rbrest)  {u  input u v  :: i n t } { v  =  '=  specifications  During  :: i n t } { i : : i n t } (i*2*clock_time,  {n_val  :: N}  =  (i+1)*2*clock_time-l)  n_val)  Identity  input);  let  all  = Nvar " a l l "  let  al2  = Nvar  "al2"  let  al3  = Nvar  "al3"  let  al4  = Nvar " a l 4 "  let  a21  = Nvar  " a21"  let  a22  = Nvar  "a22"  let  a23  = Nvar  "a23"  let  a24  = Nvar  "a24"  let  a31  = Nvar  "a31"  let  a32  = Nvar  "a32"  let  a33  = Nvar  "a33"  let  a34  = Nvar  "a34"  let  a41  = Nvar  "a41"  let  a42  = Nvar  "a42"  let  a43  = Nvar  "a43"  let  a44  = Nvar  "a44"  let  bll  = Nvar  "bll"  let  bl2  = Nvar  "bl2"  let  bl3  = Nvar " b l 3 "  let  bl4  = Nvar  "bl4"  let  b21  = Nvar  " b21"  let  b22  = Nvar  "b22"  let  b23  = Nvar  "b23"  let  b24  = Nvar  "b24"  let  b31  = Nvar  "b31"  let  b32  = Nvar  "b32"  let  b33  = Nvar  "b33"  let  b34  = Nvar  "b34"  let  b41  = Nvar  "b41"  let  b42  = Nvar  "b42"  let  b43  = Nvar  "b43"  let  let  the_inputs // [  b44  = Nvar  "b44";  =  aO  al  a2  a3  bO  bl  b2  b3  //o  ([  '0,  '0,  '0,  '0] , [  '0,  '0,  '0,  '0] ) ,  ([  '0,  '0,  '0,  '0] , [  '0,  '0,  '0,  '0] ) ,  / / l  ([  '0,  '0,  '0,  '0] , [  '0,  '0,  '0,  '0] ) ,  7/2  ([  '0,  '0,  '0,  '0] , [  '0,  '0,  '0,  '0] ) ,  //3  ([  '0,  a l l ,  '0,  '0] , [  '0, b l l ,  '0,  '0] ) ,  //4  ([  '0,  '0,  a21.  '0] , [  '0,  '0] ) ,  //5  ([al2,  '0,  '0,  a31] ,  [b21,  '0,  bl2,  '0,  '0,  bl3]),  //6  ([  '0,  a22,  '0,  '0] , [  '0, b22,  '0,  '0] ) ,  in  (t  '0,  '0,  a32,  '0] , [  '0,  '0,  b23,  '0] ) ,  //8  ([a23,  '0,  '0,  '0,  '0,  a42] ,  [b32,  b24]),  //9  ([  '0,  a33,  '0,  '0] , [  '0, b 3 3 ,  '0,  '0] ) ,  //10  ([  '0,  '0,  a43,  '0] , [  '0,  '0,  b34,  '0] ) ,  / / l l  249  Appendix C. Program listing  ([a34, ([ ([ ([  // let  '0,  '0,  '0, a44, '0, '0, '0, '0,  '0, '0, '0,  '0], '0] , ' 0 ] ,  '0],  [b43,  '0, '0,'0, [ '0, b44, [ ' 0, , ' 0, •o,.. [ '0, '0, '0,  '0] '0] '0] '0]  ) , //12 ) , //13 ) , //14 ) •//15; ]  Output s p e c i f i c a t i o n s timeForOutputs = // 1 2 3 4 //  [ [ [ [ [  6, 7, 8, 9] , 7, 9, 10, 11], 8, 10, 12, 13], 9, 11, 13, 15]  // // // //  1 2 3 4  ] ; .  
let  outputFor row c o l = e l c o l (el row [ = = =  timeForOutputs);  let let let let  InputForCells = a d d f i r s t x (a,b,c) addsecond x (a,b,c) addthird x (a,b,c)  ] ; (x:a,b,c); (a,x:b,c); (a,b,x:c);  let  InputAtStage n t h e _ l i s t s = val (avals, bvals) = e l (n+1) the_inputs i n l e t l e f t _ l i s t = map (\x.setlnput A {x::int} 0 n (el (x+1) avals)) (0 upto (array_depth-l)) in let right_list = map (\x.setlnput B 0 x n (el (x+1) bvals)) (0 upto (array_width-l)) i n l e t down_list = (map (\x.setlnput IN_C (array_depth-l) {x::int} n ('0)) (0 upto (array_width-l)))@ (map (\x.setlnput IN_C x (array_width-l) {n::int} ('0)) (0 upto (array_depth-2))) i n l e t r e s l = InsertActiveList a d d f i r s t l e f t _ l i s t t h e _ l i s t s i n l e t res2 = InsertActiveList addsecond r i g h t _ l i s t r e s l i n InsertActiveList addthird down_list res2;  let let  start_step = InputAtStage this_step = start_step;  0 [] ;  let  PropagateVal addfn row c o l okl {ok2::bool} res o l d _ l i s t = okl AND ok2 => InsertActiveTheorem addfn (row, c o l , res) o l d _ l i s t | old_list;  let  PropagateRes row c o l a l l res r e s _ l =  l e t num_step  = 0;  Appendix C. Program listing  l e t c_index = "C"~(num2str(array_width-col-l+row)) i n a l l AND (row*col = 0) => (c_index, res, (row, c o l ) ) : r e s _ l | res_l; letrec A  ProcessStageRow n {row::int} [] so_far = so_far ProcessStageRow n row ((col, colthms):rest) (prop_list, res_l) = l e t make_step (a, b, c) = l e t ok a n = length a > n i n l e t all_thms = (Identity(ClockAnt ((n+1)*2))):(a@b@c) i n l e t ab_inps = (a@b) i n let a l l = ok all_thms 3 i n l e t curr_gen = a l l => Conjunct [cell_proof row c o l , Apropagate_proof row c o l , Bpropagate_proof row col] | length ab_inps = 2 => Conjunct [Apropagate_proof row col,Bpropagate_proof row col] | ok a 0 => Apropagate_proof row c o l | Bpropagate_proof row c o l i n l e t curr_thm = Transform (TimeShift (2*n*clock_time)) curr_gen i n l e t inps = Conjunct all_thms i n l e t res = normaliseCon (GenTransThm inps curr_thm) i n l e t new_l = PropagateVal addfirst row (col+1) (col<(array_width-l)) (ok a 0) res p r o p _ l i s t i n l e t new_r = PropagateVal addsecond (row+1) c o l (row<(array_depth-l)) (ok b 0) res new_l i n l e t new_d = PropagateVal addthird (row-1) (col-1) ((row*col) > 0) a l l res new_r i n l e t new_rl= PropagateRes row c o l a l l res r e s _ l in empty ab_inps => (prop_list, res_l) | (new_d, new_rl) in ProcessStageRow n row rest (make_step colthms);  letrec /\  ProcessStageProof n [] so_far = so_far ProcessStageProof n ((row,rowthms):rest) so_far = l e t current = ProcessStageRow n row rowthms so_far i n (print ("Doing row "~(int2str row)"nl)) seq (current catch current) seq ProcessStageProof n rest current;  250  Appendix C. Program listing  251  let  do_step n start_step = letrec perform m curr_step = l e t current = ProcessStageProof m (InputAtStage m curr_step) ([] , []) i n (print ("Performing step " " ( i n t 2 s t r m)~nl~nl)) seq (current catch current)'seq m =n => [snd current] | (snd current):(perform (m+1) ( f s t current)) i n perform 0 start_step; •  let  output_list  = do_step 15 [];  // present results l e t ShowRes t r e s _ l i s t = e l (t+1) r e s _ l i s t ; let  Show t node = l e t res = ShowRes t output_list i n f i n d (\(x,y,a,b) . 
(x=node) AND ( (a* {b: : int}') = 0)) res;  let  OutputOfArray row c o l = l e t s t r i p (Always r f) = f i n val (a, th, b, c) = Show (outputFor row col) ("C"'(num2str(3+row-col))) i n s t r i p (con_of th) ;  letrec PrintRowOutput row c o l = (col = array_width+l) => n l ' n l | ("(""(int2str row)~" , " " ( i n t 2 s t r col)~") :"~ (el2str (OutputOfArray row c o l ) ) " n l ) "(PrintRowOutput row (col+1)); letrec PrintOutput row = row = array_depth + 1 => n l I (PrintRowOutput row 1) " (PrintOutput (row+1));  Index  compositionality, 6, 35 completeness, 187 property, 91-113 structural, 222-230 conjunction rule, 94 consequence rules, 96 consequent, 70 CSA, 134, 143 defined,112 verification, 143 verification code, 240  =g>,71 = 0 , 71 C ,42 ET>,73 n,76 ^,47  abstract data type, 120 abstraction, 6, 38 antecedent, 70 assertions, 71, 83  A, A*, A , see defining sequence set data representation, 120 De Morgan's Law f  £,48  B8ZS encoder, 143 BDD, 19 variable ordering, 20, 126 bilattice, 47 Binary decision diagrams, see BDD bottom element model, 42 truth domain, 48 branching time, 45  Q.49  TL, 60, 207 defining pair, 53 defining sequence, 35 defining sequence set, 77-78 defining set, 53 defining trajectory, 35 defining trajectory set, 81 depth, 59 direct method, 131 disjunction rule, 95 domain information, 188 downward closed, 8  C, see circuit model carry-save adder, see CSA characteristic representation, 118 circuit model, 63 correctness, 89 satisfaction relation, 64 state space, 10 complete lattice, 7 composition of models, 223 compositional theory, 35, 185 inference rules  S, 85 FL, 116 future research, 186 generalised transitivity rule, 111 guard rule, 102  TL, 92-104 TL„, 104-106  hand proof, 24 heuristic, 126 252  Index  hidden weighted bit, 141, 239 identity rule, 93 IFIP WG10.5 benchmark matrix multiplier, 162 multiplier, 149 implication <2,50 TL, 57 inconsistency, 42, 70 inference rules implementation, 123-128 theory, 92-104 information ordering state space, 42, 183 truth domain, 48 integer, 120 interpretation, 61 interpretations representation, 117 join, 7 lattice, 7, 183 linear time, 45 logic quaternary, 184 definition, 47-51 motivation, 11 temporal, 21 mapping method, 134 matrix multiplier circuit, 162 meet, 7 min g, 72 modal logic, 21 model checking combined with theorem proving, 31 review, 27-31 model comparison, 23 model structure, 41 monotonic, 8  multipliers example verifications, 147 floating point, 148 IFIP WG10.5 benchmark, 149 other work, 160 verification code, 240 next state function, 43, 45-47 next state relation, 45^17, 187 non-determinism, 43, 187 notation sequences, 56 OBDDs, see BDD V, see power set parametric representation, 118 partial order, 7, 48, 183 partial order state space, 8 power set, 73 preorder, 7, 74 prototype, 15, 31, 123 Q, see logic,quaternary Q -predicate, 52 quaternary logic, see logic, quaternary TZ, see realisable states IZr, see trajectory, realisable range, 54 realisable fragment of TL„, 65, 89 states, 42-43 trajectory, 69 Sf, see trajectory satisfaction scalar, 56 symbolic, 62 simple, 54 specialisation definition, 102 discussion, 99-103  Index  254  heuristic, 127 rule, 103 state explosion problem, 5 state representation, 118, 183 strict dependence, 111 substitution, 100 substitution rule, 100 summary of results, 183-186 symbolic model checking, 30 symbolic simulation, 63 symbolic trajectory evaluation, 83-88 algorithm, 131, 132, 134 introduced, 11 original version, 33 T, T \ T , see defining trajectory set temporal logic, 21 testing 
machines, 132 theorem prover, 25 combined with model checking, 31-33 combined with STE, 123 thesis goals and objectives, 12-15 outline, 15 time-shift heuristic, 127 time-shift rule, 93 TL algebraic laws, 60, 207 boolean subset, 85, 100 equivalence of formulas, 59 scalar, 52-59 semantics, 56, 62 symbolic, 60-63 syntax, 55, 61 T L , 63—65 trajectory, 69 assertions, 35 _ defining trajectory set, 81 formulas, 34 minimal, 72 realisable, 69 f  n  trajectory evaluation algorithms, 128 transitivity rule, 98 truth ordering, 48 X,42 unrealisable behaviour, 42 until rule, 103 upward closed, 8 variable ordering, see BDD, variable ordering variables, 60 verification condition, see assertions Voss, 31, 116 U,42 Y , 43  

Cite

Citation Scheme:

        

Citations by CSL (citeproc-js)

Usage Statistics

Share

Embed

Customize your widget with the following options, then copy and paste the code below into the HTML of your page to embed this item in your website.
                        
                            <div id="ubcOpenCollectionsWidgetDisplay">
                            <script id="ubcOpenCollectionsWidget"
                            src="{[{embed.src}]}"
                            data-item="{[{embed.item}]}"
                            data-collection="{[{embed.collection}]}"
                            data-metadata="{[{embed.showMetadata}]}"
                            data-width="{[{embed.width}]}"
                            async >
                            </script>
                            </div>
                        
                    
IIIF logo Our image viewer uses the IIIF 2.0 standard. To load this item in other compatible viewers, use this url:
http://iiif.library.ubc.ca/presentation/dsp.831.1-0051407/manifest

Comment

Related Items