UBC Theses and Dissertations



Consistency-based diagnosis using dynamic models Watkins, Andrew. 1999



Full Text

Consistency-Based Diagnosis Using Dynamic Models

by

Andrew Watkins
B.Eng., University of Victoria, 1990

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Department of Electrical and Computer Engineering)

We accept this thesis as conforming to the required standard

The University of British Columbia
September 1999

© Andrew Watkins, 1999

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

The University of British Columbia
Vancouver, Canada

Abstract

As society grows ever more reliant on increasingly complex technology, so does the importance of being able to quickly detect and diagnose the inevitable failures. Thus, the relatively new field of computer-assisted diagnosis has emerged to help fill this need. To date, expert systems form the most common paradigm for computer-assisted diagnosis, but another approach, that of model-based diagnosis (MBD), is superior in many respects and offers several advantages over expert systems. These advantages stem from the fact that MBD uses system models that capture underlying information about both the behaviour and the structure of an artifact. This representation of the artifact allows unanticipated faults to be diagnosed, the system to be simulated and its reactions to unusual inputs to be predicted, and makes reactive, real-time system control possible. However, the most important and difficult aspect of MBD is creating usable models; this thesis presents a new approach to modeling for diagnosis that allows simple, declarative models to be written that accurately reflect the behaviour of the artifact of interest. This is done by applying the technique of consistency-based diagnosis (one form of MBD) to the logic-based languages cc and tec. The cc language allows models to be written using concurrent logic agents and constraints. Tec extends this by allowing temporal models and default logic to be expressed. The application of consistency-based diagnosis to cc represents an important contribution to the field of model-based diagnosis because the concurrent nature of these languages corresponds well to the multiple, concurrently operating components found in most artifacts, and their use of constraints allows flexible, qualitative models to be written. These benefits are extended by tec, which allows temporal models exhibiting default behaviour to be realized. Together, cc and tec form a new approach to model-based diagnosis that is declarative, expressive, flexible and able to diagnose temporal models with timeouts.
Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Introduction
1.1 Model-Based Diagnosis
1.2 Model-Based Diagnosis Using tec
1.2.1 The cc Modeling Language
1.2.2 The tec Modeling Language
1.3 Objectives
1.4 Summary of Results
1.5 Chapter Overview

2 Background
2.1 Consistency-Based Diagnosis
2.2 Introduction to cc and tec
2.3 Diagnosis of Dynamic Systems
2.4 Other Approaches
2.4.1 The Rockwell Approach
2.4.2 The NASA Approach
2.5 Summary

3 Modeling, Fault Detection and Diagnosis with cc
3.1 Modeling with cc
3.2 Diagnosis of cc Programs
3.3 Integrating Fault Models
3.4 Summary

4 Modeling, Fault Detection and Diagnosis with tec
4.1 Modeling in tec
4.2 tec for Real-Time Control
4.2.1 Tec as a Control Language
4.2.2 The Artifact
4.3 tec Diagnosis
4.3.1 Diagnosis of Tec Programs
4.3.2 The System Description
4.3.3 The tec Interpreter
4.3.4 The Diagnosis Procedure
4.4 Active and Reactive Models
4.5 Diagnosis of Timeouts
4.6 Summary

5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Work

References

List of Tables

3.1 Clp(V) Entailment Rules
4.1 Full-Adder Execution Trace
4.2 Valid Signals and Values for the LEGO Factory
4.3 LEGO Controller Commands
4.4 Observations of Correct Behaviour for Untimed and Timed Models

List of Figures

2.1 cc Program Syntax
2.2 tec Program Syntax
2.3 Definition of a JK Flip-Flop Written in tec
2.4 Definitions of tec Combinators
3.1 AND-OR Circuit
3.2 cc Program for the AND-OR Circuit
3.3 Representation of PΔ
3.4 Delaying Theory Completion
3.5 The Three-Bulb Circuit
3.6 Agent for the Battery Component
4.1 Inverter with Delay
4.2 Full Adder with Gate Delays
4.3 Relationship Between States and Tec Iterations
4.4 State Diagram and Digital Circuit
4.5 Tec Model of the State Machine
4.6 Interaction Between clp(S), tec and the Driver
4.7 Motor-Controlled Pneumatic Valve
4.8 Tec Agent for Switching Pneumatic Valve Position
4.9 Factory Conveyor Belt
4.10 Tec Agent for Controlling the Conveyor Belt
4.11 Definitions of tec Combinators
4.12 The LEGO Factory
4.13 Diagram of the Model Factory
4.14 LEGO Factory System Description in tec
4.15 The Six States of the LEGO Factory
4.16 Simulation Output
4.17 Overview of the Diagnostic Process
4.18 Active and Reactive Models of the Arm Component
4.19 Flexible Conveyor Belt Agent
4.20 Reactive, Qualitative Factory System Description in tec

Acknowledgements

I would like to express my sincere thanks to my supervisor, Dr. Greg Bond, for introducing me to this area of research, for providing very substantial groundwork that really helped me to get up to speed quickly, and for always giving me good advice whenever I reached what I thought was an impasse. I would similarly like to thank Dr. Peter Lawrence for his help, making it possible for me to complete my studies. I have also received great support from my parents Dave and Lois, my sister Lori and cousin Mike Rooney. But most importantly, I would like to thank Denise for her patience and faith. Finally, I would like to thank some of my lab mates who have really helped me: Ying Fang, Kendra Cooper, Maryam Moghaddas, Buke Otkunc and Paria Noorkami.

Andrew Watkins
The University of British Columbia
September 1999

To Denise.
ix Chapter 1 Introduction The field of computer-aided diagnosis is an expanding discipline with great relevance to today's highly complex world. Within this field, model-based diagnosis is emerging as the most promising technique for diagnosing faults in increasingly complicated, dynamic systems. However, it has been 'emerging' for at least the past 15 years with few successful industrial applications. This lack of success is due, in part, to the difficulty in creating suitable models with current techniques. This chapter provides a brief introduction to the concepts of model-based diagnosis. The limitations of current techniques are then outlined and new modeling languages for diagnosis are proposed. These are the languages cc and tec. The objectives of this thesis are then outlined, followed by the results that were achieved. 1.1 Model-Based Diagnosis Every day our society becomes more dependent on technology. Wi th it many old industries, such as saw mills, have improved efficiency and increased profits. In other situations (computer networks, for example), the technology itself has created whole new markets and the industries to serve them. In either case, the constant pressure to gain a competitive advantage and to improve productivity means that this technology is becoming both increasingly sophisticated and increasingly relied upon. However, along with these complex systems come the inevitable failures: machines break down, circuits fail and software bugs crop up. W i t h such a reliance on technology, diagnosing and fixing these faults quickly has become extremely important and yet, at the same time it has become in-creasingly difficult. The complex nature of today's devices means that a large amount of background 1 CHAPTER 1. INTRODUCTION 2 knowledge is required just to understand the problems that occur and the pervasiveness of technol-ogy means that more people than ever are required to have this knowledge. As diagnosing faults has become more important, the field of computer-assisted diagnosis has emerged to help deal with these problems. Expert systems were the first and are still the most widely used foundation for computer-assisted diagnosis. Expert systems attempt to capture the diagnostic knowledge of a human expert by relating the symptoms of failures to the underlying faults and then storing this knowledge in a 'rulebase'. This knowledge can then be used in the future by presenting the expert system with observed symptoms and letting it use its rules to determine the most likely cause of the failure from its internal list of possible faults. However, expert systems have some inherent disadvantages. One is that capturing this diagnostic knowledge is not easy, as most human experts have difficulty translating their accumulated 'wisdom' into expert system rules. Another, more fundamental disad-vantage, is that that expert systems generally contain knowledge only about how a system fails. This leaves important information about how the system normally behaves out of the expert system's knowledge base. Expert systems generally cannot deal with unanticipated faults because they have no understanding of the underlying structure or behaviour of the system. In the early 1980's a new approach to diagnosis was developed through the work of de Kleer and Brown [9], Genesereth [13] and Davis [7]. This is the model-based approach which uses a representation, or model, of the correct behaviour of the system of interest. 
Therefore, while expert systems model how systems fail, the model-based approach captures knowledge of how systems work by modeling their structure and how they normally behave. This so called "deep knowledge" [25] is then used to predict the behaviour of the system. In the case of a failure, there will be a discrepancy between the normal behaviour predicted by the model and abnormal behaviour observed in the actual system. It is this discrepancy between the predicted behaviour and the observations that is used to form the diagnosis. These ideas were clarified by Reiter [25] who detailed consistency-based diagnosis, which is one specific form of model-based diagnosis. Wi th consistency-based diagnosis each component in the model is labeled as being either normal or abnormal. The discrepancy between the observed and expected behaviours is used to guide this labeling and if, by labeling certain components as abnormal the discrepancy can be explained, then these components form a diagnosis. These researchers were all members of the artificial intelligence (AI) community, as diagnosis has always been an important area of interest in AI . In developing this model-based approach to CHAPTER 1. INTRODUCTION 3 diagnosis, AI researchers built their models using logic, a tool they were very familiar with. Using logic as the modeling framework has many advantages: logic based models are precise and declarative and their underlying properties are well known. Most applications of consistency-based diagnosis, however, use propositional logic to construct their models. Propositional logic has no variables and its simple form makes it easier to manipu-late than higher-order forms of logic. However, propositional logic can be inflexible and lacks the expressiveness of higher-order logics such as predicate logic, which supports variables. For example, propositional logic can only model an inverter by enumerating all input/output combinations, such as (m_0 —>• out.I) A (in A —> out Si). This approach quickly breaks down when modeling a more complex system, such as a multiplier. However, predicate logic can easily describe a multiplier as \tXW3Z(mult(X, Y, Z) <- Z = X * Y). Simple propositional logic also produces brittle models that are unable to reflect a range of possible behaviours and it has no intrinsic representation of time, needed to create temporal models. Model-based diagnostic reasoning is only as good as the models used [8] and in order for consistency-based diagnosis to be successful, a flexible and expressive modeling framework is of primary importance. 1.2 Model-Based Diagnosis Using tec 1.2.1 The cc Modeling Language In the early 1990's a new approach to logic-based modeling appeared in the form of concur-rent constraint programming. The concurrent constraint (or more simply, cc) approach is due to Saraswat [27] and represents the confluence of two ideas that, up until that point, had been separate areas of research in logic programming. These are the areas of concurrent logic programming and constraint logic programming. Concurrent logic programming originated in the early 1980's [3] as an application of parallel execution of processes to logic programming. This new approach to logic programming however, was hindered by problems with communication and synchronization between the processes and by CHAPTER 1. INTRODUCTION 4 inefficient execution of the goals [26]. Constraint logic programming, on the other hand, deals with using logic programming techniques to evaluate constraints [19, 20, 22]. 
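To make the contrast above concrete, the following small sketch (an invented illustration in Python, not code from this thesis) shows the two modeling styles side by side: the propositional-style inverter must enumerate its input/output cases, while the predicate-style multiplier states a single relation over variables.

# Illustrative sketch only: contrasting enumerated (propositional-style) models
# with relational (predicate-style) models. All names here are invented.

# Propositional style: every input/output combination is written out explicitly,
# e.g. in_0 -> out_1 and in_1 -> out_0 for an inverter.
INVERTER_CASES = {0: 1, 1: 0}          # finite enumeration of the behaviour

def inverter_ok(inp, out):
    """Check an observation against the enumerated inverter model."""
    return INVERTER_CASES[inp] == out

# Relational style: one constraint over variables, mult(X, Y, Z) <-> Z = X * Y,
# covers every case without enumeration.
def mult_ok(x, y, z):
    """Check an observation against the relational multiplier model."""
    return z == x * y

if __name__ == "__main__":
    print(inverter_ok(1, 0))   # True: consistent with the model
    print(mult_ok(3, 4, 12))   # True
    print(mult_ok(3, 4, 11))   # False: a discrepancy a diagnoser could exploit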
Constraints are pieces of partial information, such as x < 5 or V — I x R, that restrict the values that variables can hold without necessarily providing specific values for those variables. These concepts were combined in the concurrent constraint programming paradigm using a few simple ideas, one of which was to use a global store as a repository for storing all constraints. Multiple, concurrently executing logic programs, called agents, are then used to interact with this store, adding new constraints and checking if other constraints are satisfied by information already in the store. These two ideas turned out to be complementary. The parallelism of the concurrent agents and the flexibility of the constraints greatly increase the expressive power of the language while the global constraint store solves the problems of communication and synchronization among agents. This new paradigm shows good potential for use as a logic-based modeling language for diagno-sis. Cc agents implement first-order predicate logic, allowing greater expressiveness than is possible with simpler propositional logic and cc's use of constraints allows flexible, qualitative models to be written. Also, the parallel nature of agents mirrors the parallel nature of components that make up a complex system and, finally, models written in cc are compositional, meaning that components can be mixed into the model just by adding new agents. 1.2.2 The tec Model ing Language Cc actually forms the basis for a family of languages, one of which is Timed Default cc, or tec. In tec the basic concepts of cc have been extended in two ways. The first way is with the addition of Reiter's defaults [24]. Defaults specify an action to be taken when certain conditions are not met, similar to the else statement in standard procedural programming languages. Defaults are important because they allow the detection of missing information, something not possible with basic cc. The second way in which tec extends cc is with the addition of the notion of time to the language. This is because a tec program is evaluated as a series of cc executions, each one representing a snapshot in time. Tec greatly improves the modeling potential of cc. The addition of time allows temporal models to be expressed. This is important because most real-world systems change over time and, to be useful, a modeling language must be able to capture this. Defaults too extend the expressiveness of the language. In combination with the temporal properties of tec they allow timeouts to be modeled CHAPTER 1. INTRODUCTION 5 by expressing what should happen if an event does not occur. The ability to model timeouts is important because most real-time systems use timeouts to detect error conditions and the ability to model this aspect of system behaviour means that the causes of these timeout conditions can be diagnosed. Tec seems well suited to creating the models needed for consistency-based diagnosis. The parallel nature of its predicate-logic agents produces models that are declarative and expressive and the ability to easily add in new agents makes the models composible. Furthermore, the use of constraints allows these models much flexibility in the range of behaviours they represent and the incorporation of time and defaults allows temporal models to be created and timeouts diagnosed. 
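As a rough illustration of constraints as partial information (an invented Python sketch, not cc itself; the component values and tolerances are assumptions made for the example), a model that only bounds its behaviour remains consistent with a whole range of correct observations, which is the flexibility referred to above.

# Illustrative sketch (not thesis code): constraints as partial information.
# A component model that only bounds its behaviour stays consistent with a
# range of correct observations instead of one exact value.

def resistor_model(i_amps, r_ohms=(95.0, 105.0)):
    """V = I * R with R known only as an interval (e.g. a 100-ohm, 5% part).
    Returns the interval of voltages consistent with the model."""
    lo, hi = r_ohms
    return (i_amps * lo, i_amps * hi)

def consistent(observed_v, predicted_interval):
    lo, hi = predicted_interval
    return lo <= observed_v <= hi

if __name__ == "__main__":
    pred = resistor_model(0.5)          # predicted V is the interval (47.5, 52.5)
    print(consistent(50.1, pred))       # True: within normal variation
    print(consistent(75.0, pred))       # False: discrepancy, so a fault is detected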
1.3 Objectives The main objective of this thesis is to investigate the use of tec as the language used in creating the models employed by a consistency-based diagnosis procedure and to discover any resulting ramifications. Although the importance of consistency-based diagnosis has grown steadily, it has been pursued mainly as an area of pure research and has not matured into an engineering discipline. This lack of success is due, in part, to the restrictiveness of current modeling techniques. Tec offers much promise toward improving the applicability of consistency-based diagnosis because it allows models to be expressed that are not possible with standard propositional logic. Tec's use of constraints allow more flexible, qualitative models to be written that are consistent with a range of observations making them less brittle in the face of variations of correct behaviour. Also, ice's use of default logic allows missing behaviour to be modeled. The parallel nature of ice's agents make models inherently declarative because the structure of the model closely corresponds to the structure of the artifact. In short, tec allows logic based models to be written that are expressive, declarative, composible, flexible and temporal, and, if these attributes can be carried over to a consistency-based diagnosis procedure significant gains in the field of model-based diagnosis could be realized. 1.4 Summary of Results This research shows that cc and tec can be successfully applied to consistency-based diagnosis. This is an important result because it directly addresses one of the biggest drawbacks of model-based CHAPTER 1. INTRODUCTION 6 diagnosis, the difficulty in creating useful models. In conducting this research, interpreters for cc and tec were written and then modified to incorporate the consistency-based diagnosis procedure, creating a generic framework for diagnosing cc or tec programs or for models written in these languages. Consistency-based diagnosis is first applied to the simpler cc language. Cc offers many ad-vantages as a modeling language and these beneficial properties extend to diagnosis. Most of the standard examples commonly cited in the diagnosis literature are reproduced with models that are short, clear and declarative. In applying consistency-based diagnosis to cc, a process of theory completion is identified, along with its ability to provide sample input and output values for failed components; these values aid in 'explaining' the diagnoses. Furthermore, it is shown that fault mod-els can be incorporated and that this framework can be used to perform another form of model-based diagnosis: abductive diagnosis. This diagnostic approach is then extended to ice, allowing temporal models of artifacts exhibit-ing much more complex behaviour to be written for both simulation and diagnosis. This is important because this diagnostic approach with ice is the first we are aware of that is able to use a first-order temporal modeling language, whereas existing comparable approaches use propositional temporal modeling languages. Additionally, ice's usefulness as a real-time control language is demonstrated by using it to control a model of a small factory that was build as a testbench and which could be modeled and diagnosed. This too is a useful result because not only does it demonstrate ice's applicability in filling very different roles, it also means that faults in both the control program and in the model can be diagnosed with the same procedure. 
Wi th this testbench, ice's ability to model and diagnose systems that can be represented as a series of discrete states is demonstrated. Finally, it is shown that the inequality constraints of ice allow flexible, qualitative models to be constructed that are able to reflect a range of correct behaviours while still conflicting with observations of faulty behaviour. Furthermore, the temporal and default logic properties of ice allow timeouts to be detected and their causes diagnosed. Timeouts are very commonly used to detect errors in industrial systems and, to the best of our knowledge, this is the first approach able to diagnose possible causes for these failures. CHAPTER 1. INTRODUCTION 7 1.5 Chapter Overview In chapter one, this chapter, an introduction to the concept of model-based diagnosis is presented. The languages cc and tec are also briefly outlined and their superior modeling properties discussed. The most important part of model-based diagnosis is the model and fee's superior expressiveness, flexibility and declarative nature appear to make it ideal. Therefore, this chapter outlines the objective of using tec as the modeling language for model-based diagnosis. Chapter two provides further background into consistency-based diagnosis and the cc family of languages. It then outlines two existing implementations of diagnosis with dynamic models. These examples are used to underscore the desirable qualities of a diagnosis of dynamic systems. Chapter three describes applying the simpler cc language to diagnosis. It is found that a custom constraint system is required to model component ports and that a process of theory completion is needed to check potential diagnoses for consistency with the observations. Chapter four extends the ideas of chapter three. First, creating temporal models and models that use defaults are discussed. Then, the physical artifact that was used as a testbase is described. The most interesting part of this testbase is that tec was used to control the system. Diagnosing faults in this system are described. Finally, creating flexible models and diagnosing timeouts is discussed. Chapter five outlines the conclusions arrived and and outlines future recommendations. Chapter 2 Background This chapter provides a brief introduction to the different concepts which form the foundations of this thesis. At its heart, this research examines the combination of two very different areas of research, those of consistency-based diagnosis and concurrent constraint programming. Together they form a new approach to diagnosis that is both powerful and flexible. This combination has been refined and extended beyond static systems diagnosis to allow dynamic systems to be modeled and diagnosed. This chapter also discusses some of the difficulties associated with diagnosing dynamic systems and examines two existing implementations. While each of these are successful demonstrations of diagnosing dynamic systems, they still have many limitations. It is these limitations that serve to illustrate what characteristics a general, flexible and powerful approach to model-based diagnosis should possess. 2.1 Consistency-Based Diagnosis Consistency-based diagnosis (CBD) is a logical approach to diagnosing faults in systems. This technique relies on models, written in logic, to predict the behaviour of the system that is being diagnosed (also known as the artifact) and is unique in that only the correct behaviour of the artifact is modeled. 
If the model has been written correctly and the artifact is working properly, then the observations taken from the artifact should be consistent with what the model predicts. However, if the observations are not consistent with the model, then a fault is detected. Consistency-based 8 CHAPTER 2. BACKGROUND 9 diagnosis uses this conflict between the expected and observed behaviours to implicate one or more components of the system which could explain the fault. Each of these combinations of components is a potential diagnosis. This approach contrasts with that of expert systems, which are currently the most widely implemented paradigm for diagnosisfll]. Expert systems are so named because they attempt to capture the knowledge of a human expert using rules of the form if Effect then Cause. When applied to diagnosis, these rules relate symptoms to failures and attempt to capture the knowledge of an experienced diagnostician who can diagnose many different faults based on the observed symptoms. A n expert system uses these rules, in conjunction with the observed symptoms, to deduce possible failures. So, while expert systems attempt to capture knowledge of how a system fails and what are the associated symptoms, the consistency-based diagnosis approach models how the system behaves correctly. The consistency-based diagnosis approach has been said to have a 'deep knowledge' of the system and this permits it to do things that expert systems cannot. The biggest benefit of the consistency-based diagnosis approach is that unanticipated faults can be diagnosed. However, because consistency-based diagnosis is model-based, other important activities such as simulation and planning are possible. As stated above, consistency-based diagnosis relies on logical models of the artifacts being diag-nosed. These models are constructed as a set of interconnected components, their interconnections mirroring the structure of the artifact. The components themselves are modeled as a set of con-straints which define the relationships between inputs and outputs. In the case of an observed fault, it is these constraints which create the inconsistency between the observed and expected behaviours. If, however, these behavioural constraints are lifted, then the the conflict will go away. This then is the essence of consistency-based diagnosis: those components in the system model whose behavioural constraints, when lifted, restore the consistency between the model and the observed behaviour form a diagnosis for the observed fault. Obviously, if the behavioural constraints of every component in the model are lifted, there will never be an inconsistency. Therefore, the goal of the diagnostic procedure is to find the smallest sets of components forming diagnoses, these are the minimal diagnoses. In consistency-based diagnosis the system being modeled is generally formalized as a pair (SD, COM PS) where COM PS is a finite set of constants comprising the components of the system and SD is the system description written in a set of first-order logic sentences which describe the components' behaviours and interconnections. The observations from the artifact are denoted by CHAPTER 2. BACKGROUND 10 OBS and a special (distinguished) predicate ab(c) (where c £ COM PS) is used to indicate that c is behaving abnormally while -iafe(c) means that the component c is operating correctly. In other words ab(c) means that component c is abnormal, while -^ab(c) indicates that component c is not abnormal (i.e. is normal). 
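Before the system description and the formal definition of diagnosis given next, the following toy sketch (an invented Python example, not taken from the thesis) may help fix the idea: a labeling assigns ab or not-ab to each component, and only the components labeled normal are required to satisfy their behavioural constraints.

# Toy sketch (invented example): labelling components as abnormal or normal and
# checking the labelling for consistency with the observations.

# Normal-behaviour constraints B_c for each component, over observed wire values.
BEHAVIOUR = {
    "inv1": lambda w: w["b"] == 1 - w["a"],   # b = not a
    "inv2": lambda w: w["c"] == 1 - w["b"],   # c = not b
}

def consistent(labelling, wires):
    """labelling maps each component to True (ab) or False (normal).
    Only components labelled normal must satisfy their behaviour constraint."""
    return all(ab or BEHAVIOUR[c](wires) for c, ab in labelling.items())

if __name__ == "__main__":
    obs = {"a": 0, "b": 1, "c": 1}            # faulty: c should be 0
    print(consistent({"inv1": False, "inv2": False}, obs))  # False: conflict detected
    print(consistent({"inv1": False, "inv2": True},  obs))  # True: {inv2} explains it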
The system description is written such that each component's normal behaviour is required in the model of the system only if the component has not been labeled as abnormal; therefore SD has the form

SD = { B_c ∨ ab(c) | c ∈ COMPS }

where B_c describes the normal behaviour of component c. A labeling of ¬ab(c) means that the normal behaviour model for the component is used, while if a component is labeled ab(c), then (B_c ∨ ab(c)) ∧ ab(c) reduces to ab(c) and all behavioural constraints disappear. A diagnosis is then a labeling of all components c ∈ COMPS as being either abnormal (ab(c)) or normal (¬ab(c)), such that consistency with the observations is restored.

Definition 2.1.1 (Diagnosis) Let COMPS be partitioned into two sets Δ and COMPS − Δ, and let D(Δ, COMPS − Δ) denote the conjunction of ab(c) for every c ∈ Δ and ¬ab(c) for every c ∈ COMPS − Δ. Then D(Δ, COMPS − Δ) is a diagnosis for (SD, COMPS) and OBS iff SD ∪ OBS ∪ {D(Δ, COMPS − Δ)} is consistent. Furthermore, a diagnosis is a minimal diagnosis if there is no proper subset Δ′ of Δ such that D(Δ′, COMPS − Δ′) is also a diagnosis. (This notation is due to Reiter [25].)

Although the above definition determines whether a given subset of COMPS forms a diagnosis, it does not describe how to form these hypotheses. A naive approach would be to generate and then test all subsets of COMPS; however, the number of such subsets is exponential in the number of components, making this approach impractical. Fortunately, more efficient techniques exist which, on average, greatly reduce the number of hypotheses that must be checked. This efficiency is gained from the observation that, when only correct behaviour is modeled, all supersets of a diagnosis are also diagnoses. Initially, the hypothesis that all components are faulty (Δ = COMPS) is tested, before successively smaller subsets of Δ are checked. Using this approach, minimal diagnoses can be found, one at a time, in polynomial time [2].

Consistency-based diagnosis is not the only method to use logic to diagnose systems. A second approach, called abductive diagnosis [23], uses logic models of how the components can fail. This is similar to the expert system approach except that the cause and effect relationship in the rules is reversed. Abductive fault models have the (arguably more natural [30]) form if Cause then Effect, modeling how a fault will cause a symptom. With rules of this form, abductive diagnosis involves hypothesizing minimal faults which imply the observed symptoms. In an abductive diagnosis the diagnoses entail the observed faults; they are really explanations of the observed behaviour [30], as there is a causal link from the diagnosis to the observations. This is in contrast to CBD, where the diagnoses are merely consistent with the observations. However, unless one has a complete model of the failure modes of the artifact, the CBD approach is preferable [17].

In reality, consistency-based diagnosis and abductive diagnosis form two extremes of a range of approaches to logic-based diagnosis. CBD techniques may include fault models and abductive techniques often include models of normal behaviour in addition to fault models. This thesis examines the addition of fault models to CBD.

2.2 Introduction to cc and tec

In the previous section, models were said to be constructed using "behavioural constraints" but no explanation was given as to what form these constraints take and how they are evaluated.
This section discusses cc and tec, two closely related concurrent constraint programming languages used for modeling with constraints. A constraint is simply a piece of partial information. Constraints can take forms such as ranges of values like 0 < X < 10, relations between variables such as V = I x R, or they can be combinations of these forms. Constraints are defined over different domains, such as real numbers or booleans, and these domains define what values variables can take and what are the allowed functions and relations. Constraints become interesting when they are grouped together because this allows complex problems to be described. A constraint solver is used to check if all constraints in the group are satisfiable and, if so, what value or values will satisfy them. Concurrent constraint programming [27] (CCP) is a programming paradigm built on top of two different concepts: constraint programming and concurrent logic programming. The foundation of concurrent logic programming provides a framework whereby agents, each of which is a small logic program, can be executed concurrently; constraint programming provides the means by which agents interact. Thus, these two concepts complement one another, providing a simple but powerful mechanism for scheduling the evaluation of constraints. Agents are the basic construct of a C C P program and they communicate through a global constraint store. This constraint store accumulates constraints as they are evaluated by agents and CHAPTER 2. BACKGROUND 12 also provides memory for agents much like conventional programs use variables to store specific values and alter the flow of control. However, it is important to note that the use of constraints make the C C P approach fundamentally different from that of the conventional von Neumann concept of computing. In the latter, memory is used to store specific values, whereas with C C P , memory "stores a collection of pieces of partial information about values that variables can take." [28] The agents in a concurrent constraint program use two basic primitives to interact with one another by way of the global constraint store: ask and tell. Tell operations are used to add con-straints to the store and ask operations are used to determine if a constraint is entailed by the store. The ask operation has three possible outcomes: • If the store contains enough information to imply the truth of the asked constraint, then the ask succeeds. • If the store contains enough information to prove that the asked constraint is false, then the ask fails. • However, if the store does not contain enough information to imply either the asAied constraint or its negation, then the ask operation blocks. This suspends the asking agent until such time as the store contains enough information to entail either the asked constraint or its negation. This blocking action forms the basis of a synchronization primitive. Such a primitive is required in any framework involving concurrent processes. These ideas form the basis of a family of concurrent constraint programming languages, the most basic of which is Determinate CCP, or simply cc. Derivatives of cc, such as tec, extend its expressiveness and flexibility but all members of the family share the same declarative nature where programs are concurrently evaluated logical formulae (i.e. agents). Computation of a cc program begins with an initial agent and an empty store. 
This agent may start other agents, each of which may add constraints to the store or check if constraints are entailed by the store. As constraints are added, the store evolves monotonically. That is, if the information a leads to a set of constraints, these will also be true given more information a A 6. Computation continues until the program reaches a state of quiescence where no more information is being added to the store. Figure 2.1 shows the grammar of a cc program. The computation of a cc program can only move forward in the presence of positive information; it lacks the ability to detect missing, or negative information. To address this limitation, cc has CHAPTER 2. BACKGROUND 13 (Program) P ::= H :: A \ P.P (Agents) A ::= {c} | if c then A I A, A I [XTA | [X]$,4 I H (Procedure Call) H ::= p(X\,... ,Xn) (Tell) (Ask) (Parallel Composition) (Existential Quantification 3X A(X)) (Universal Quantification V X A(X)) (Procedure Call) Figure 2.1: cc Program Syntax been extended to Default cc by including the expression of defaults. Defaults allow conclusions to be drawn in the absence of conflicting information [24]. Syntactically, this adds (Agents) A ::= | if c else A (Else) to the grammar. Rules of this form are evaluated only after all other agents which could add positive information to the store have run to quiescence. However, evaluating A may add more constraints to the store, possibly even the antecedent c. To make this contradiction explicit we hypothesize the negation of the antecedent (-ic) and add it to the store before evaluating A [15]. Note that the use of else does not have the same meaning as if ->c then A where the store must contain enough information to entail ->c, while the else succeeds when the the-store does not contain enough information to entail c. A further extension to the cc family of languages has been to add the notion of time to Default cc, creating Timed Default cc or more simply, tec [29]. Tec divides time up into a sequence of steps and at each time step a separate Default cc program is executed. Each of these iterations begins with an empty store and there is no relation between successive stores other than those which are made explicit. The representation of time in cc is accomplished with the addition of the next operator which defers an agent's execution until the subsequent iteration. Syntactically, this adds (Agents) A ::= | next A (Next) to the grammar. Figure 2.2 gives the complete grammar for tec. This use of a series of Default cc executions is useful for modeling reactive systems and is based on the hypothesis of Perfect Synchrony [1]. Reactive systems are those which respond to external events (including clock ticks) and the hypothesis of Perfect Synchrony says that this response CHAPTER 2. BACKGROUND 14 (Program) P (Agents) A H ::A\ P.P {c} (Tell) (Ask) (Parallel Composition) (Existential Quantification 3X A(X)) (Universal Quantification A(X)) (Else) (Next) (Abort processing, call A) (Procedure Call) if c then A A,A [X]~A [X}%A if c else A next A aborts H (Procedure Call) H = p(Xu...,Xn) Figure 2.2: tec Program Syntax (including the detection of missing information) is instantaneous. Viewed in this light, a reactive system can be modeled as a sequence of executions, each of which begins with a set of inputs and quiesces with a set of reactive outputs. Clock ticks are just another signal and these ticks can come from more than one source, thus a multiform notion of time is supported. 
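The ask/tell semantics, defaults and the next operator described above can be sketched as a small discrete-time engine. The following is an invented, heavily simplified Python illustration (not the thesis' tec interpreter, and it ignores interactions between defaults): facts told during a tick accumulate in a store, agents whose ask is not yet entailed stay blocked, else branches run only at quiescence, and next-agents are carried over to the following tick.

# Illustrative sketch only (simplified semantics, invented helpers): told facts
# accumulate in a per-tick store; blocked asks wait; defaults run at quiescence;
# 'next' defers an agent to the following time step.

def run_tick(agents, inputs):
    """One timed iteration: run agents to quiescence over a fresh store.
    Each agent is (ask, then, else_); then/else_ return (tells, next_agents)."""
    store = set(inputs)              # every iteration starts from its inputs
    deferred = []                    # agents passed to the next tick via 'next'
    pending = list(agents)
    progress = True
    while progress:                  # positive phase: run until quiescence
        progress, still_blocked = False, []
        for ask, then, else_ in pending:
            if ask in store:                         # ask entailed: run 'then'
                tells, nxt = then()
                store |= set(tells)
                deferred += nxt
                progress = True
            else:                                    # not entailed yet: block
                still_blocked.append((ask, then, else_))
        pending = still_blocked
    for ask, then, else_ in pending:                 # default phase: ask absent
        if else_ is not None:
            tells, nxt = else_()
            store |= set(tells)
            deferred += nxt
    return store, deferred

# Usage: an agent in the spirit of the JK flip-flop of Figure 2.3 below.  If the
# set input 'j' is present this tick, assert q=1 at the next tick; by default
# (no 'j' seen before quiescence) assert q=0 at the next tick.
if __name__ == "__main__":
    agent = ("j",
             lambda: ([], [("tick", lambda: (["q=1"], []), None)]),
             lambda: ([], [("tick", lambda: (["q=0"], []), None)]))
    _, carried = run_tick([agent], inputs={"j"})     # tick 0: 'j' observed
    store, _ = run_tick(carried, inputs={"tick"})    # tick 1
    print(sorted(store))                             # ['q=1', 'tick']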
With these extensions, tec becomes a powerful modeling language, able to model and diag-nose systems which change over time and able to detect missing information. In combination with consistency-based diagnosis this forms a powerful diagnostic tool, able to model and predict the behaviour of state-based dynamic systems. Figure 2.2 shows an example of a model of a J K flip-flop written in tec that uses defaults and models time. j k f f ( J , K , Q ) : : i f c l p p ( Q : 0 ) t h e n ( i f c l p p ( J : 0 ) t h e n (nex t ( c l p p ( Q : 0 ) } ) e l s e (nex t { c l p p ( Q : l ) } ) e l s e ( i f c l p p ( K : l ) t h e n (nex t ( c l p p ( Q : 0 ) } ) e l s e (nex t { c l p p ( Q : l ) } ) ) • Figure 2.3: Definition of a J K Flip-Flop Written in tec. The expressiveness of tec can be further extended by defining a series of combinators. These CHAPTER 2. BACKGROUND 15 always(A) whenever C do A do P watching C do P watching C timeout B A , next always (A). if C then A else next (whenever C do A ) . if C else (P, next (do P watching C)). if C then B else (P, next (do P watching C timeout B)). Figure 2.4: Definitions of tec combinators agents, written in tec itself, provide a set of commonly used and powerful constructs. These combi-nators are: U n c o n d i t i o n a l Pers is tence: The combinator 'always(yi) ' will run the agent A in every iteration. E x t e n d e d W a i t : The combinator ' W h e n e v e r c do A' will wait until c is entailed before A is Watchdogs : The combinator 'do P wa tch ing c' will execute agent P in every iteration up until (but not including) the time when c is entailed. The combinator 'do P wa tch ing c t imeout B' behaves similarly but will also run agent B when the store entails c. Figure 2.4 shows the definitions of these combinators. In the real world, most of the artifacts that one would like to be able to diagnose are not static systems, such a combinational logic circuits, they are dynamic systems that change over time. Wi th dynamic systems, the behaviour depends not only on the inputs, but also on when those inputs occur. This makes diagnosis of dynamic systems inherently more difficult than diagnosis of static systems. In their early work on this subject, Hamscher and Davis [18] used digital circuits with state to show that it is rarely possible to diagnose dynamic systems using only the initial inputs and final outputs as the observations. Doing so leaves the problem too unconstrained, causing an unacceptable number of components to become possible diagnoses. The corollary of this result is that diagnosing dynamic systems requires the ability to predict intermediate values in the operation of the model and to be able to compare these with intermediate observations from the artifact. However, this is not always possible in practice as some states and state variables are not directly observable. executed. 2.3 Diagnosis of Dynamic Systems CHAPTER 2. BACKGROUND 16 Therefore, the diagnosis procedure should be able to accommodate whatever temporal observations are available, using them to refine the set of diagnostic candidates. Diagnosis of dynamic systems also requires the use of dynamic models. These models can be used to represent the continuous state-based behaviour of the artifact, or they can be discrete state-based. Discrete state-based models are generally simpler, describing finite state machines where outputs and next-state transitions both depend on the current inputs and the current state. 
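Referring back to the combinators of Figure 2.4, the following sketch (an invented Python helper, not tec itself; the belt and error signals are made up) captures the intent of 'do P watching C timeout B' over discrete ticks: the body runs each tick until the watched condition is entailed, at which point the timeout branch runs instead.

# Illustrative sketch only: the intent of 'do P watching C timeout B' from
# Figure 2.4, phrased as a plain loop over discrete time steps.

def do_watching_timeout(body, watching, timeout, ticks):
    """Run body(t) each tick until the watched condition holds; when it does,
    run the timeout branch once and stop. 'ticks' yields per-tick inputs."""
    for t, inputs in enumerate(ticks):
        if watching(inputs):          # condition entailed this tick
            timeout(t)                # corresponds to the 'timeout B' branch
            return
        body(t)                       # corresponds to 'do P ...'

if __name__ == "__main__":
    # Watch for an 'error' signal; until then keep a belt running; on error,
    # react in the same tick, as the watchdog combinator prescribes.
    ticks = [set(), set(), {"error"}, set()]
    do_watching_timeout(
        body=lambda t: print(f"tick {t}: belt running"),
        watching=lambda inputs: "error" in inputs,
        timeout=lambda t: print(f"tick {t}: error seen, stopping belt"),
        ticks=ticks,
    )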
Wi th state-based models, the current state is represented using state variables and time is represented with the state transitions. In modeling dynamic systems with state machines, one difficulty that arises is determining what granularity of changes in the artifact should be represented by the state transitions in the model. If this granularity is made too fine, the diagnostic procedure may become intractable. On the other hand, a representation that's too coarse may not be able to discriminate between different candidates [18]. For example, in the case of a flip-flop it may be reasonable to model state changes at the same granularity as the system clock, but in other cases it may be necessary to use a finer granularity in order to model the delays of the individual gates which make up the device. The author is unaware of a formal approach to choosing the correct granularity for state changes, in [18] an intuitive, ad hoc approach is taken. However, if modeling a periodic system where the state variable determining state changes is a clock tick, then the Nyquist frequency should be used as a guideline. Finally, another issue that arises in dynamic systems is the temporality of faults. This is because faults may be permanent or they may be transient, manifesting themselves for only a portion of the time observed. Any temporal diagnostic procedure should be able to diagnose permanent failures as well as being able to detect, diagnose and localize transient failures. 2.4 Other Approaches Despite many years of research in model-based diagnosis there have been relatively few practical applications of the approach. In this section two such implementations are examined. The first, developed by Darwiche and Provan [6, 5] of the Rockwell Science Center, has been successfully used to diagnose faults in a small factory system. The second example by Williams and Nayak [32], both associated with N A S A , has actually been launched into space aboard N A S A ' s Deep-Space One. This spacecraft is designed to test several new technologies including on-board diagnosis, which can CHAPTER 2. BACKGROUND 17 detect and recover from failures without the assistance of earthlings. This is an exciting milestone for model-based diagnosis. While both of these approaches have been used to model and diagnose dynamic systems, they still only represent stages in the evolution of model-based diagnosis, not the final result, and their limitations highlight the characteristics that a more general approach should possess. These are the characteristics of declarativeness, where models succinctly describe what the components do, not the process by which they work, expressibility, meaning that the modeling language limitations should not restrict the desired behaviour from being described, composibility, where models can be constructed by mixing together available components, flexibility, meaning that models will not 'break' when confronted with slight variations in input values and finally multi-purposeness meaning that models can be used for other purposes besides just diagnosis. 2.4.1 The Rockwel l Approach The Rockwell approach of Darwiche and Provan is unique in that it uses the topology of the artifact to guide the construction of the system description. This "Structured System Description" or SSD, addresses a major problem with previous approaches which is that the space of possible failures grows exponentially as the complexity of the system description increases. 
In the Rockwell approach, the SSD is written using propositional logic sentences. However, the manner in which they are combined is constrained by a set of rules; for example, cyclic dependencies are not allowed. The major benefit of this technique is its efficiency, as the diagnosis search space grows only linearly with complexity of the system. This, in turn, allows a set of conflicts (and thus the possible diagnoses) to be generated quickly given an observation. These ideas were first derived for combinatorial systems, but have been extended to model dynamic systems as well. This has been done through the use of the next operator from temporal logic. Wi th this addition the Rockwell approach models dynamic systems as a time sequence of static systems and allows dynamic systems, such as a manufacturing plant, to be modeled and diagnosed. However, even though this approach to model-based diagnosis has been applied successfully, it has several restrictions which can really be viewed as limitations. A n analysis of these limitations and how they constrain the diagnostic process is very useful in identifying and underlining the characteristics that a flexible and powerful, model-based diagnostic procedure should possess. One of the main constraints imposed by the Rockwell approach is that the modeling language is restricted to propositional logic. This limitation on expressiveness restricts the models and increases CHAPTER 2. BACKGROUND 18 the difficulty in accurately representing the artifact with a model. A second limitation is that SSDs represent groups of components and separate SSDs can only be combined through the use of special sensor and actuator objects. This identifies a second desirable quality of a model-based diagnosis procedure, that of composibility. Just as actual components are combined together to progressively build up the artifact, a composible modeling language should allow models to be constructed by combining components together in an arbitrary manner. A third limitation actually stems from the central idea of the Rockwell approach, this being that the system description must be expressed with a specific structure. Creating an SSD means following several rules and guidelines and ignoring these can be "devastating computationally." For example, instead of modeling a 5-input AND-gate with one node and five parents, one should model it as four cascaded 2-input AND-gates [6]. While these rules maintain the efficiency, they can mean that the model may not reflect the true structure of the artifact. A more declarative and expressive approach would maintain an accurate mapping between the model and artifact. Finally, this approach is geared solely towards diagnosis. However, a dynamic model has other applications as well, including simulation, envisionment (hypothesizing possible futures), planning and control. A multi-purpose model-based approach to diagnosis should also support some or all of these features. 2.4.2 The N A S A Approach A second example of an application of model-based diagnosis is Wil l iams' and Nayak's 'Livingstone', the diagnostic component of N A S A ' s Deep Space One (DS1) spacecraft launched in October, 1998. Although it also has modest scientific goals, DSl ' s main purpose is to test and validate several new technologies including an ion rocket engine and an autonomous control system called Remote Agent that is based on artificial intelligence. 
A key component of the Remote Agent is Livingstone, which uses a single model to perform a variety of functions including: confirming hardware modes, detecting and diagnosing failures and reconfiguring hardware in order to isolate faults and recover from failures. As with the Rockwell approach, models are written using proposition al logic extended with the next operator from temporal logic. However, in this case the artifact is modeled as a transition sys-tem, using two sets of formulae. One set uses state variables to describe feasible states of the system. The other formulae describe possible transitions between the states. These possible state transitions describe both nominal transitions and failure transitions. The nominal transitions describe how the CHAPTER 2. BACKGROUND 19 the system evolves normally and the failure transitions describe possible failures of the components. The occurrence of a fault is then represented by the model taking a failure transition to a failure state. This approach is used to accomplish two main goals of the system, mode identification (MI) and mode reconfiguration (MR) . Mode identification is used to determine the current state of the system, given the last known state and any current observations. This is equivalent to diagnosis because all components that must have failed in order to be consistent with the current observations are identified. Mode reconfiguration is then used to recover from these failures, using the model and the current system state to plan how this recovery should be accomplished. The N A S A approach is more advanced than that of Rockwell, for several reasons. The first is that the models used are more multi-purpose, with the same model being used for a variety of purposes, such as diagnosis and planning. Another is that the models are more expressive, capturing both normal and abnormal behaviour in the form of nominal and failure transitions. The N A S A models have also been extended further by associating probability and cost functions with the transitions. Probabilities are used during mode identification to help identify the most likely mode. Cost functions are used in planning to determine the least expensive route that will move the system to the desired state. However, despite these improvements, the N A S A approach still has several restrictions would not be desirable in a more flexible and powerful approach. The first is the use again of propositional logic as the modeling language. This does not allow the use of defaults and limits the expressiveness of the technique. The second restriction is that sensor inputs must first be preprocessed to translate them into qualitative summaries (such as high, low or nominal) before they can be used by the model. A more flexible approach would allow the raw sensor values to be used directly which still being able to use the data in a qualitative manner. 2.5 Summary These two examples (especially the N A S A approach) are good examples of the state of the art in model-based diagnosis of dynamic systems and they demonstrate that such an approach is indeed practical. However, these approaches still have some limitations which underscore the need for a diagnostic method that is declarative, expressive, composible, flexibile and multi-purpose. A consistency-based diagnosis procedure using tec as the modeling language has the potential to CHAPTER 2. BACKGROUND 20 fulfill these requirements. Tec programs are made up of concurrently executing agents. 
Each of these agents is a small logic program, able to declaratively model a subcomponent of the artifact. These agents allow models to be written in default predicate logic, greatly improving their expressiveness over simple propositional logic. The agents then run concurrently with the constraints between them determining how they will interact. Agents are self-contained entities and can be mixed in with pre-existing agents, creating an approach where the overall behaviour is a composition of all of the interacting agents. Finally, models created using tec can also be used for simulation, planning and control. Their accurate representation of the underlying structure of the artifact allows models to be used for several purposes. Chapter 3 Modeling, Fault Detection and Diagnosis wi th cc While the previous chapter introduced consistency-based diagnosis and the basic cc programming language separately, this chapter discusses their combination. Consistency-based diagnosis requires models of the systems being analyzed and, as a modeling language, cc has many beneficial properties. This chapter first discusses modeling aspects of cc including using the underlying constraint systems to create component models. Modifying the operation of cc to incorporate consistency-based diagnosis is then discussed including the difficulties encountered when component constraints are lifted. Finally, the addition of fault models to the process is examined. 3.1 Modeling with cc A real-world system, such as an automobile engine, consists of a set of interconnected components all working together whose individual behaviours are governed by both their natural function and their interactions with other components. For example, a gear's natural function is to spin, but when integrated into a larger system this behaviour is constrained by the other gears it meshes with. Similarly, a cc program consists of a set of concurrently executing agents whose behaviour is dictated by their prescribed function (each is a logical formula), and by their interactions with other agents. This natural correspondence between real-world systems and cc programs makes cc highly applicable as a modeling language because agents naturally model system components. When using 21 CHAPTER 3. MODELING, FAULT DETECTION AND DIAGNOSIS WITH CC 22 cc for modeling, an agent's name denotes the component it models and the agent's arguments correspond to the component's input and output port values. The body of each agent then contains the rules describing the corresponding component's behaviour. Agents are denoted by p(X) —»• B where p(X) is the agent's name and parameters and B is its body. A cc language is defined over top of one or more constraint systems which implement the rules for evaluating different types of constraints. Three common constraint systems are clp(7l), clp(J-T>) and clp(B) which define constraints over real numbersj integers with finite domains and booleans, respectively. Constraint systems can also be custom defined for less general domains. One such system is clp(V) which has been designed especially for modeling with cc. Clp(V) defines a set behaviours common to component ports, the terminals that are used to connect components together and which are used to pass data by way of variables. Clp(V) restricts component ports that are connected together to have the same value, just as a wire that connects two electrical components has one and only one voltage. 
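As a rough sketch of this port behaviour (an invented Python helper built on a union-find of connected ports; it is not the clp(V) solver formalized next), connected ports form one equivalence class and are forced to carry a single value, just like a shared wire.

# Rough illustration (not the thesis' constraint solver): connected ports form
# one equivalence class and must carry a single value, like a shared wire.

class Ports:
    def __init__(self):
        self.parent, self.value = {}, {}

    def _find(self, p):
        self.parent.setdefault(p, p)
        while self.parent[p] != p:
            self.parent[p] = self.parent[self.parent[p]]
            p = self.parent[p]
        return p

    def connect(self, a, b):
        """Wire two ports together; fails if they already hold different values."""
        ra, rb = self._find(a), self._find(b)
        if ra == rb:
            return True
        va, vb = self.value.get(ra), self.value.get(rb)
        if va is not None and vb is not None and va != vb:
            return False                      # inconsistent: one wire, two voltages
        self.parent[rb] = ra
        self.value[ra] = va if va is not None else vb
        return True

    def tell(self, p, v):
        """Assert a value on a port (in the spirit of a clpp-style tell)."""
        r = self._find(p)
        if self.value.get(r) not in (None, v):
            return False
        self.value[r] = v
        return True

    def ask(self, p):
        """Read a port's value, or None if not yet determined (a blocked ask)."""
        return self.value.get(self._find(p))

if __name__ == "__main__":
    w = Ports()
    w.connect("and1.out", "or1.in1")   # the two gates share a wire
    w.tell("and1.out", 1)
    print(w.ask("or1.in1"))            # 1: connected ports hold the same value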
A clp(V) constraint has the form (cr: V) where a is an atom that identifies the port and V is the port's value. Clp(V) constraints are defined by the following two axioms: \/P3V clpp(P : V) ViVVXVY {{clpp(N : X),clpp(N : Y)) -»• (X = Y)) which state that every port has some value and if any two ports have the same name (i.e. are connected), then they will have the same value. Through asks and tells, agents communicate values via clp(V) in a producer/consumer fash-ion. A consumer agent will block on an ask until a producer agent tells the corresponding clp(V) constraint to the store. A clp(V) tell has the syntax '{clpp(.P: V)} ' where P is an existential vari-able representing the port name and V, the port's value, is a variable or a ground term. When this constraint is added to the store then P is replaced by a Skolem constant 1 . A clp(V) ask has the syn-tax ' i f c lpp(P: V) then A' and this creates several possible situations of free and uninstantiated variables. Table 3.1 outlines the rules for possible interactions. In a cc program, clp(V) can be used to mix constraint systems by associating a constraint variable from another constraint system with a port name. Figure 3.2 gives the cc model of the and-or gate shown in figure 3.1. In this example clp(B) is used to model the logic of the gates and 1 A Skolem constant is a ground term used to represent an instance of an existential variable. CHAPTER 3. MODELING, FAULT DETECTION AND DIAGNOSIS WITH C C 23 Clp(V JStore Agent Action [empty] A : if (P:X) then foo(X) Block {a- : 1} A : if (a : X) then foo(X) Launch foo(l) {a :Y} A : if (a : X) then foo(X) Equate X to Y , launch foo(Y) {a : Y } A : if (a : 1) then foo Equate Y to 1, launch foo Table 3.1: Clp(V) Entailment Rules. clp(V) is used to associate these boolean variables with the ports of the components. The clp(B) constraints define the relationship between input and output variables and the clp(V) constraints relate these variables to the gate's ports, ensuring that gates which share ports (i.e. variable Q) hold only one value. Figure 3.1: A N D - O R Circuit. and(A,B,Q)::[Va,Vb,Vq]$( i f ( c l p p ( A : V a ) , c l p p ( B : V b ) ) then ( { c l p b ( V q =:= Va * Vb)}, {clpp(Q:Vq)}) ) • or(A,B,Q)::[Va,Vb,Vq]$( i f ( c l p p ( A : V a ) , c l p p ( B : V b ) ) then ( { c l p b ( V q =:= Va + Vb)}, {clpp(Q:Vq)}) ) . a n d - o r ( A , B , C , Z ) : : [ Q l * ( and(A,B,Q), or(Q,C,Z) ) . Figure 3.2: Cc Program for the A N D - O R Circuit. CHAPTER 3. MODELING, FAULT DETECTION AND DIAGNOSIS WITH C C 24 3.2 Diagnosis of cc Programs In the previous section it was shown that there is a one-to-one correspondence between agents in a cc program P and the components in a modeled system, with agent names representing the components. Recalling from the previous chapter that the components of a system are labeled C O M P S , we label these agent names which represent the components C O M P S ( P ) . Also, in the previous chapter a system description was defined to be a set of constraints that describe each component's correct behaviour. In other words, for each component p's behaviour Bp, the system description contained a clause ->a6(p) —¥ Bp. Therefore, a system description defined by a cc program P is given by: SD(P) = {(P(X) -> (Bp V ab(p))) | (p(X) Bv) £ P } U CS, where CS represents the underlying constraint system. Given these definitions, we can rewrite the definition of consistency-based diagnosis from the previous chapter in terms of diagnosing cc programs. 
Definition 3.2.1 (Diagnosis of a cc Program) Given a cc program P describing a system (SD(P), COMPS(P)), a set of assertions OBS and Δ ⊆ COMPS(P) containing only components assumed abnormal, then D(Δ, COMPS(P) − Δ) is a consistency-based diagnosis for (SD(P), COMPS(P)) and a set of assertions OBS iff SD(P) ∪ OBS ∪ D(Δ, COMPS(P) − Δ) is consistent.

This diagnosis D(Δ, COMPS(P) − Δ) is a partitioning of COMPS(P) into two sets: Δ, which consists of the agents representing those components assumed abnormal, and COMPS(P) − Δ, the remaining agents representing the normally behaving components. Given the definition of SD(P), members of Δ have definitions of the form ((p(X) → (B_p ∨ ab(p))) ∧ ab(p)), or more simply ab(p), and members of COMPS(P) − Δ have definitions of the form ((p(X) → B_p) ∧ ¬ab(p)). Since the predicate ab(·) does not appear in a cc program, checking a hypothesis Δ for consistency reduces to removing the agents in Δ from P and testing to see if the remaining agents run to quiescence. In other words, the (possibly conflicting) constraints imposed by the agents in Δ are lifted by removing the agents themselves from the program P. This new program, PΔ, is then checked for consistency with OBS.

While logically sound, this removal of an agent or agents creates a problem for the consistency checking procedure because it effectively creates a 'hole' in the system description. This gap can prevent values from being propagated forwards and backwards along the remaining constraints. Consider the circuit in figure 3.1 along with the observation A = 1, B = 1, C = 0, and Z = 0.

Figure 3.3: Representation of PΔ.

In checking the diagnosis D({A1}, {O1}), all constraints on the behaviour of A1 are removed, as shown in figure 3.3. In removing these constraints, the relationship between A, B and Q becomes undefined and this presents a difficulty in determining a value for Q. On the other hand, Q is no longer constrained by A1 and can take on any value that is consistent with O1. Therefore, a process of theory completion is required in order to choose a value for Q that will produce a consistency when the remaining agents are run.

Theory completion uses the fact that agents in a cc program are written as 'information transducers' of the form 'if c then A'. These information transducers extract data from the store by asking the constraint c and then processing this information in agent A. Usually A will also tell any results back to the store. In the case above, the agent for A1 has been removed from the program, which means that there is no agent to tell a value for Q. This causes the agent for O1 to block indefinitely while trying to extract the value of Q with an ask. Theory completion simply involves telling c, the antecedent of the blocked agent. In the case above, the antecedent clpp(Q:Va) from the or agent would be told directly; this is acceptable because Q can have any value, since A1's behaviour is undefined. In general, it is only possible to use theory completion to tell antecedents which contain unrestrained variables. Unrestrained variables are those port variables of agents that belong to Δ. Antecedents of agents which contain unrestrained variables are unrestrained antecedents. Theory completion is the act of telling different combinations of unrestrained antecedents in order to create a consistency.
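The consistency check just described can be summarised procedurally. The following Python sketch is only an illustration of the control flow, using the and-or circuit as a toy example; it is not the cc implementation, and the encoding of each gate as a Python function is purely for exposition. Enumerating values for the internal port Q plays the role of theory completion, supplying the values that the removed agent no longer tells.

from itertools import product

# Toy encoding of the and-or circuit of figure 3.1: each component maps a
# candidate assignment of port values to True if its normal behaviour is
# consistent with that assignment.
COMPONENTS = {
    "A1": lambda v: v["Q"] == (v["A"] and v["B"]),   # AND gate
    "O1": lambda v: v["Z"] == (v["Q"] or v["C"]),    # OR gate
}
INTERNAL_PORTS = ["Q"]   # ports whose values are not fixed by OBS

def consistent(delta, obs):
    # Remove the agents in delta, then search for internal port values
    # that let the remaining agents agree with the observations.
    remaining = [f for name, f in COMPONENTS.items() if name not in delta]
    for values in product([0, 1], repeat=len(INTERNAL_PORTS)):
        env = dict(obs, **dict(zip(INTERNAL_PORTS, values)))
        if all(f(env) for f in remaining):
            return True, env          # env doubles as an explanation
    return False, None

obs = {"A": 1, "B": 1, "C": 0, "Z": 0}
print(consistent(set(), obs))     # (False, None): a fault is detected
print(consistent({"A1"}, obs))    # (True, {..., 'Q': 0}): {A1} is a diagnosis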
This process of theory completion is very similar to abduction where, given the rule A → B and the fact B, A is hypothesized, provided it does not conflict with any other fact. However, with theory completion A is hypothesized without any prior knowledge of the fact B. If this creates an inconsistency then A is retracted, otherwise both A and B are asserted. This is acceptable since with diagnosis we are only checking for consistency.

In many cases, there are multiple agents all blocked on the same antecedent c and often their execution depends on mutually exclusive values for c. In other words, when a value for c is told to the store, all of the blocked agents will unblock but only one will continue; the rest will disappear because the constraints in their antecedents have not been satisfied. Therefore, it is not sufficient to simply tell the antecedent of the first agent blocked on c. Instead, all of the antecedents of all blocked agents must be hypothesized in turn until a consistency is found or all possibilities have been exhausted.

Besides allowing agents to run to quiescence in order to check for consistency, the theory completion process has a beneficial side effect: it creates explanations for the diagnoses. An explanation is an instance of ground values for the unrestrained variables which creates a consistency between PΔ and OBS. For example, in figure 3.3 values for A, B, C and Z are given by the OBS; theory completion then tells an uninstantiated value for Q to the store and this lets the agent for the OR gate proceed, adding a binary constraint relating C, Q and Z to the store. From these values and the relational constraint, a value for Q can be established, creating an explanation for the diagnosis D({A1}, {O1}), namely that A = 1, B = 1, and Q = 0.

Figure 3.4: Delaying Theory Completion.

Finally, in theory completion, it is important to delay hypothesizing unrestrained antecedents until all other agents have been evaluated. Consider the circuit in figure 3.4. If, in this example, A2's constraints are lifted (i.e. Δ = {A2}), then Q is an unrestrained variable and A3's antecedent is an unrestrained antecedent. If A3 is evaluated first, many different values for Q may be hypothesized until one is found which is consistent with A1. However, if A1 is evaluated first, then Q's value can be established right away and consistency can quickly be checked. In more complex systems, the theory completion process may flounder unless the remaining agents are first allowed to establish values for as many unrestrained variables as possible before any values are hypothesized.

3.3 Integrating Fault Models

Figure 3.5: The Three-Bulb Circuit.

One of the criticisms of consistency-based diagnosis has been that, while it produces diagnoses that are consistent with the observations, many times these diagnoses are highly improbable or physically impossible. Consider the three lightbulb circuit in figure 3.5, originally presented in [31], along with the observation that bulb B3 is lit, but the other two bulbs, B1 and B2, are not lit. While the most obvious diagnosis is Δ = {B1, B2}, the consistency-based diagnosis procedure also generates other diagnoses like Δ = {S, B3}.
The explanation for this second diagnosis is that the battery S is faulty because it is not producing a voltage and bulb B3 is faulty because it is lit in the absence of a voltage. While the second diagnosis is extremely unlikely, the diagnostic procedure gives all diagnoses equal weight because it has no knowledge of how components commonly fail. However, for most types of components there are common fault modalities and it is often desirable to include this additional information about component behaviours in the system description. These fault models describe the way components fail and are often assigned probabilities so that the likelihood of failure can be determined for single and multiple faults. One difficulty with fault models, however, is that if all fault modes have not been accounted for, then the diagnostic procedure can fail to find a diagnosis. To deal with this problem, an additional 'unknown fault' model is often introduced [10] which imposes no behavioural constraints and so always succeeds.

Diagnosis with cc is able to incorporate fault models in addition to implementing a purely consistency-based diagnosis procedure, permitting the most likely diagnoses to be determined. This section discusses three methods by which this can be accomplished. The first uses a special variable to determine the state of each component in the system. The second method uses consistency-based diagnosis to refine this technique, while the third method replaces the normal behaviour with fault behaviour in the component models.

The first approach requires that the system description be written in a particular way, such that each component's description includes a mode variable. For each component, the value of this variable determines which set of constraints will govern the component's behaviour. If mode = ok, then the nominal set of constraints will be used to give the component its normal behaviour. Otherwise, if mode indicates a fault, then the set of constraints which provide the corresponding fault model's behaviour will be used instead. Figure 3.6 gives the cc agent for a battery component where, if Mode = ok, then the rule which provides normal values for the terminals will be used (energized = 1) but if Mode = dead, then the rules which model the behaviour of a dead battery will be used.

battery(Mode,Left,Right) ::
    if (Mode=ok) then {Left=1, Right=1},
    if (Mode=dead) then {Left=0, Right=0}.

Figure 3.6: Agent for the Battery Component.

With this approach, a diagnosis is found by making the mode variable for each component an unrestrained variable. Doing so will cause the diagnosis procedure to use theory completion to hypothesize different values for the components' modes until a set of modes is found where the behaviour resulting from the model is consistent with the observations. When a consistency is found, those components whose modes have been hypothesized as abnormal will form a diagnosis. If there is no 'unknown fault' mode, then it is assumed that all fault modes are completely characterized and any diagnoses will actually be abductive diagnoses because the hypothesized modes will then entail the observations. Using this approach, the results of Codognet and Saraswat [4], who used cc to perform abductive diagnosis on the three lightbulb example (see figure 3.5), have been reproduced.
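The mode-variable approach amounts to a search over mode assignments. The following Python sketch is only illustrative; the thesis performs this search through theory completion inside the cc interpreter, and the behaviours attached to each mode below are simplified stand-ins for the constraint sets of the component models (the wires are assumed healthy).

from itertools import product

MODES = {
    "S":  ["ok", "dead"],     # battery
    "B1": ["ok", "blown"],    # bulbs
    "B2": ["ok", "blown"],
    "B3": ["ok", "blown"],
}

def predicted_lit(modes):
    # Simplified behaviour: a bulb lights exactly when the battery is ok
    # and the bulb itself is ok.
    powered = modes["S"] == "ok"
    return {b: powered and modes[b] == "ok" for b in ("B1", "B2", "B3")}

observed = {"B1": False, "B2": False, "B3": True}

# Hypothesize every combination of modes and keep those whose predicted
# behaviour matches the observations; the abnormal modes form a diagnosis.
for assignment in product(*MODES.values()):
    modes = dict(zip(MODES, assignment))
    if predicted_lit(modes) == observed:
        print([c for c, m in modes.items() if m != "ok"])   # ['B1', 'B2']

Because every fault mode is fully characterized in this sketch, the implausible diagnosis {S, B3} of the previous section is never produced, which is precisely the effect attributed above to complete fault models.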
However, all combinations of component modes must be hypothesized and their entailment re-lations checked until a consistency is found. This make this first technique of using theory completion to search for combinations of component modes to explain the observations very computationally expensive as the number of possibilities to check is exponential in the number of components and fault modes. In order to improve this efficiency, this technique can be refined by incorporating CHAPTER 3. MODELING, FAULT DETECTION AND DIAGNOSIS WITH CC 29 consistency-based diagnosis. A second, more efficient approach works in two steps. First, consistency-based diagnosis is used to generate candidate diagnoses and then, in a second step, theory completion is performed on the mode variables of the components in these diagnoses in order to discriminate among them. This approach is more efficient because consistency is easier to determine than entailment and the number of fault modes is not a factor in determining the initial candidate diagnoses. This approach can be used without making any changes to the cc diagnosis procedure. In the first step, consistency-based diagnosis is used to generate candidate diagnoses by fixing the value of the mode variable for each component at ok so that only normal behaviour is used. Then, in the second discrimination step, the different failure modes for the components in A are hypothesized while all other modes remain normal. The main benefit of these first two approaches is that they can be accomplished without any change to the cc diagnosis procedure. Unfortunately, the use of a mode variable goes against the spirit of consistency-based diagnosis which uses declarative models of component behaviour. A third approach to integrating fault models requires a slight modification to the cc diagnosis procedure, but allows the use of a normal system description, modeling only normal behaviour. W i t h this approach fault models for the components are stored separately and the normal consistency-based diagnosis procedure is used to generate candidate diagnoses. Then, in a second step, the normal behavioural constraints of the components in the diagnoses are replaced with the fault model constraints, one at a time, until consistency between the fault modes and the observations is found. 3.4 Summary The most important facet of model-based diagnosis is actually developing the models of the artifacts to be diagnosed. Cc makes an excellent candidate for a diagnostic modeling language, in part because its concurrently executing, logic-based 'agents' correspond well to an artifact's subcomponents. Also, because cc works in conjunction with underlying constraint systems, the power of the language can easily be extended by adding new constraint systems. One such custom constraint system is clp(V), which has been added to model component ports. In order to implement diagnosis with cc, the interpreter which executes the cc programs (artifact models) has been modified to incorporate the consistency-based diagnosis procedure. This is done by removing those agents in a candidate diagnosis from the artifact model and then checking to see CHAPTER 3. MODELING, FAULT DETECTION AND DIAGNOSIS WITH CC 30 if the remaining agents will run to completion, thereby determining their consistency. In order to allow this to happen, however, a process of theory completion is required which has the side benefit of providing explanations of the diagnoses in the form of example port values for failed components. 
Finally, pure consistency-based diagnosis often produces some diagnoses that are useless be-cause they are impossible or extremely improbable. In order to produce more realistic diagnoses, the cc diagnosis procedure can be extended to incorporate fault models. Three possible ways of accomplishing this have been discussed. Chapter 4 Modeling, Fault Detection and Diagnosis with tec In the previous chapter, modeling with cc was first discussed before diagnosis of cc programs was introduced (from a mainly theoretical standpoint) and resulting issues were explored. This chapter follows the same general approach with tec, but also goes into more detail about the practical aspects of tec diagnosis. This is because a physical model of a dynamic system was constructed to evaluate the new ideas presented here. This testbench, a model of a factory built from L E G O is an example of a small, real-time system, and in order to control it, a real-time language was required. As it turned out, it was possible to use tec as this real-time control language. This revealed a new aspect to tec, in addition to its uses for modeling and diagnosis. In the end, all of these aspects come together, as the same techniques that are used for diagnosing the model can also be used to diagnose the control program. In the first section of this chapter using tec to create dynamic models is explored. Tec is much more expressive than cc, both because of its ability to represent time and because of its ability to define default behaviour, and this means that tec models can exhibit complex behaviour. In the second section, a different tack is taken and using tec as a real-time control language is discussed. Within this context, the testbench that was constructed and is controlled by tec, is introduced. The third section then discusses diagnosis of tec programs. After a theoretical introduction, the practical aspects of diagnosing faults in the L E G O factory are discussed. This includes the tec model of the l L E G O is a trademark of the LEGO Group (INTERLEGO AG in the USA). 31 CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 32 artifact, the tec interpreter used to execute the model and the steps required to perform diagnosis. Finally, different modeling techniques and diagnosing faults due to timeouts are discussed before a summary of these ideas is given. 4.1 Modeling in tec As has already been mentioned, tec builds on cc because tec programs run as a series of cc program executions, or iterations. Although each of these iterations is assumed to take zero processing time to complete, it is also assumed that a finite amount of time passes between iterations. In other words, separate cc iterations are temporally distinct and the order in which they occur determines their order in time. This builds the concept of time right into the language because events that occur in subsequent iterations are assumed to happen after events from previous iterations; this allows temporal models to be expressed. Syntactically, this adds the next operator to cc, which defers an agent's evaluation until the subsequent iteration. Models written in tec use the next operator to model temporal behaviour by scheduling certain actions to occur after others. A second improvement over cc is tec's ability to express defaults. Defaults are due to Reiter [24] and allow inferences to be made from missing information. For example, the statement ' i f C else A.' allows the conclusion A to be drawn if C is not detected (not found to be true). 
Defaults greatly extend the expressiveness of cc because they allow 'catch-all' statements that define behaviour with-out having to enumerate every situation in which it should occur. Defaults add the else statement to the syntax of cc. These new ideas can be demonstrated with a tec model of an inverter. Cc is only able to model an idealized inverter because it can only represent the output occurring simultaneously with the input. However, there will always be some delay between an actual logic gate's input and output and tec can be used to write models that capture this reality. Figure 4.1(a) shows how this is done by adding a delay element, 8, to the output of the idealized inverter causing the result to appear after the input is received. Figures 4.1(b) and 4.1(c) are two different tec models for the inverter that both use the next operator to defer the assertion of the output constraint until the iteration after the input is received. Although they both reproduce the behaviour of the inverter, the models in figures 4.1(b) and 4.1(c) show two alternate approaches to modeling in tec. Figure 4.1(b) uses clp(V) to extract the input value from the store and then uses the boolean constraint system clp(B) to determine the CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 33 h-Out (a) not(In,Out) :: [Val,Neg]$( i f ( c l p p ( I n : V a l ) , c l p b ( N e g =:= = - V a l ) ) then next {clpp(Out:Neg)} ) . (b) not(In,Out) : : ( i f c l p p ( I n : 0 ) then next {clpp(Out 1) } ) • e l s e next {clpp(Out 0) } (c) Figure 4.1: Inverter Wi th Delay negated value of the input. Clp(V) is then used once more to assign a new value for the output port in the next iteration. Figure 4.1(c) uses defaults and ground values for port values to achieve the same effect. In this case, only an input value of 0 can be detected. If the input is 0, then the output will be 1. If the input is not 0, then the default action is taken and the output value will be 0. This demonstrates how defaults can be used to detect the presence or absence of information (in this case the presence of absence of an input value of 0). In later examples, defaults will be used to detect if certain actions have not occurred, allowing timeouts to be detected and diagnosed. This second approach can be also executed more efficiently because it does not require the use of intermediate variables. In both forms of the model, however, clp(V) constraints are used to relate values to ports. This pattern is consistent with all of the component models used in this thesis. This use of temporal models for the logic gates can be used to reveal temporal faults such as glitches when gates are used to build up larger circuits. For example, temporal models of A N D , O R and X O R logic gates were used to construct a model of the full-adder2 shown in figure 4.2. This model was then executed and used to construct the execution trace of table 4.1 which shows how the 2 Unit delay assumed for all gates. CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 34 input values propagate through the circuit. The table gives the values for the three inputs and two outputs of the full-adder as well as for the three intermediate variables p, q and r, all over eight time steps (dashes are indeterminate values). Executing this temporal model reveals a glitch because when the input X changes from a 1 to a 0 in iteration 3, it causes the output to unexpectedly change to 0 in iteration 5 before stabilizing again at 1 in iteration 6. 
This unexpected behaviour would not be revealed by the purely combinational models of cc. Output Carry Out Figure 4.2: Full Adder with Gate Delays Iteration: 0 1 2 3 4 5 6 7 X 1 1 1 0 0 0 0 0 Y 1 1 1 1 1 1 1 1 Carry In 1 1 1 1 1 1 1 1 P - 1 1 1 0 0 0 0 q - 0 0 0 1 1 1 1 r - - 1 0 0 1 1 1 Carry Out - - 1 1 0 0 0 0 Output - - 1 1 1 0 1 1 Table 4.1: Full-Adder Execution Trace Even though tec can be used to create temporal models, it cannot model systems that vary continuously with time. Instead, tec models behaviour as occurring in discrete time steps, although continuous behaviour can be approximated by making these timesteps arbitrarily short. In other words, tec is a language for describing state machines and executing a tec program is equivalent to navigating a path through the state diagram. In tracing a path through the state diagram, each cc iteration determines what the present state is and what the next state will be. The current state is maintained until the start of the next CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 35 State A State B State C initial state is A next state is B next state is C time interval between iterations cc iteration execution time (-0) Figure 4.3: Relationship Between States and Tec Rerations iteration when an instantaneous transition occurs and the previous next state becomes the current state. This relationship is shown in figure 4.3. State variables determine what the present state is and, in the context of tec, these are variables that span iterations through the use of the next operator. These ideas can be demonstrated by using tec to model an actual state machine implemented in digital logic. Figure 4.4(a) shows the state diagram where the different arcs between states correspond to different input values. Figure 4.4(b) shows the realization of this state diagram using J K flip-flops. This circuit creates a state machine because the outputs are fed back into the inputs and the flip-flops are clocked. The clocking prevents the inputs from immediately affecting the outputs, creating a series of discrete states. The tec model of this circuit is shown in figure 4.5. In the tec model, the clock is inherent in the execution of the program with one clock tick per iteration. In this model the current state is determined by reading in the values for the two state variables, QO and Q l . Then, after reading in these values, the next state is determined from the current state and the input I, this is then told with a next. The outputs are fed back into the inputs by assigning the next values for the state variables QO and Q l at the end of an iteration and then reading these values in again at the start of the subsequent iteration. The initial state of the system is set with an asynchronous clear which immediately clears both flip-flops. This state machine can now be used for other purposes besides simply modeling. Of course the main objective is to use it for diagnosis, but planning is also possible. In planning, an initial state and a desired final state are known ahead of time and the task is to determine what inputs will take the system from the initial state to the desired final state. This can be accomplished by making the CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 36 input variable / a unrestrained variable. The tec interpreter will then use theory completion to find a sequence of Is that produce a path through the state machine that is consistent with the desired states. 
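Planning in this style can be pictured with a small sketch. The transition table below is an arbitrary stand-in, not the flip-flop circuit of figure 4.4; the point is only that, once a system is viewed as a state machine, planning reduces to searching for an input sequence that is consistent with the desired initial and final states, which is what theory completion does when the input variable is left unrestrained.

from itertools import product

# A stand-in transition relation: states s0..s2, a single binary input I.
NEXT_STATE = {
    ("s0", 0): "s0", ("s0", 1): "s1",
    ("s1", 0): "s2", ("s1", 1): "s0",
    ("s2", 0): "s2", ("s2", 1): "s2",
}

def run(initial, inputs):
    # Simulate the machine, one transition per tec iteration.
    state = initial
    for i in inputs:
        state = NEXT_STATE[(state, i)]
    return state

def plan(initial, goal, horizon):
    # Search input sequences of increasing length for one reaching the goal.
    for n in range(horizon + 1):
        for inputs in product([0, 1], repeat=n):
            if run(initial, inputs) == goal:
                return list(inputs)
    return None

print(plan("s0", "s2", horizon=4))   # e.g. [1, 0]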
Input = 1 I FFO on J Q K (a) (b) Figure 4.4: State Diagram and Digital Circuit 4.2 tec for Real-Time Control In this section, using tec as a real-time control language is discussed. The first part of this section looks at how this has been accomplished and the second part introduces the artifact that was built and which tec was used to control. 4.2.1 Tec as a Con t ro l Language Although tec has been presented so far as a modeling language, it is also highly applicable to controlling real-time systems. This is because tec exhibits all of the characteristics of a class of languages known as synchronous reactive languages. The properties of these languages, which include E S T E R E L [1, 21], S I G N A L [14] and L U S T R E [16] are first discussed in this section before it is shown how a slight modification to tec allows it to be used for writing real-time control programs. Synchronous reactive languages have been developed for controlling reactive, real-time systems. A reactive system, as its name implies, reacts to inputs from the external environment, generating outputs in response. After responding to those inputs it quiesces and waits for the next inputs to occur. This produces an environment where computation occurs in bursts of activity followed by periods of inactivity. The computational model for this reactive approach to control is based upon CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH TCC 37 not(1,0) :: ( i f clpp(I:0) then (clpp(0:l)} else {clpp(0:0)} ) . or(A,B,Q) :: ( i f clpp(A:l) then {clpp(Q:1)) else ( i f clpp(B:l) then (clpp(Q:l)} else {clpp(Q:0)} ) ) • % J and K are the two inputs, % C i s asynchronous clear, Q i s output jkff(J,K,C,Q) :: i f clpp(C:l) then ( (clpp(Q:0)}, next {clpp(Q:0)} ) else( i f clpp(Q:0) then ( i f clpp(J:0) then (next {clpp(Q:0)}) else (next (clpp(Q:l))) ) else ( i f clpp(K:l) then (next (clpp(Q:0)}) else (next {clpp(Q:l))) ) ) • % d e f i n i t i o n s for the system components invl(1,0) :: not(1,0). orl(A,B,Q) :: or(A,B,Q). or2 (A, B, Q) : : or (A, B, Q) . ffO(J,K,C,Q) :: jkff(J,K,C,Q). ffl(J,K,C,Q) :: j kff(J,K,C,Q) . abvars(_). % d e f i n i t i o n for the system state_machine(I,C,QO (Ql) :: [Ibar,JO,Jl,Kl]~( orl(I,Q1,JO), i n v l ( I , I b a r ) , or2(QO,Ibar,Kl), ff0(J0,I,C,Q0), ffl(J1,K1,C,Q1), ( c l p p ( J l : l ) } Figure 4 . 5 : Tec Model of the State Machine CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 38 the hypothesis of Perfect Synchrony [28] which assumes that the system's reaction to a set of inputs (including the detection of missing information) is instantaneous. Also, with synchronous reactive languages only deterministic behaviours are allowed to be specified, meaning that the same set of inputs will always produce the same outputs. This gives the language a precisely defined semantics and allows for formal verification of programs. In effect, synchronous reactive languages describe deterministic finite state machines. In these state machines each computation phase is a state change triggered by the inputs, with the output values being determined by those inputs and the current state. Computation quiesces when a new state is reached with the transitions between states assumed to be instantaneous. In reality, there will always be some delay between the inputs being received and the outputs being produced but the hypothesis remains valid so long as the environment does not change while reactive outputs are being calculated [21]. 
The minimum time over which the controlled system can be guaranteed to remain constant creates an upper bound on the computation time of a reaction. Synchronous reactive languages offer an attractive approach to describing finite state machines because they describe their behaviour, not their structure. This is useful because small changes in the logic of a program can drastically change the structure of a finite state machine that implements it, making state machines very inconvenient to program directly [21]. One characteristic that all but the simplest of real-time control systems must posses is the support of concurrent processes. This is because complex real-world systems contain many elements all working in parallel and, most often, these separate elements need to be controlled simultaneously. Wi th a conventional real-time system running on a single processor this means that these concurrent processes must share the processor through multitasking. Usually, this creates a high degree of nondeterminism with respect to how these processes are interleaved and this can lead to faults that are spurious and difficult to diagnose. Synchronous reactive languages, on the other hand, support the notion of logical concurrency where the different tasks that control the separate components of the system are written as concurrent processes. However, although the control structures are written as parallel functions, together they form a single, deterministic finite state machine removing any nondeterminism. Furthermore, the safety of these logical processes and maximum reaction times can be analyzed apriori. However, the most important feature of these languages is their inherent assumption that computation occurs in phases of activity followed by periods of quiescence that are nonzero in length. This computational model builds the concept of time into the language structure because CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 39 events that occur in separate computational cycles are assumed to be separated in time. This means that an event can be programmed to occur in the future simply by scheduling it for an upcoming cycle. As stated above, a computation cycle occurs as a reaction to an external input and if this input is the tick of a clock then computation cycles can be made to occur at regular intervals. This makes it possible to precisely schedule when this future event will occur. This is known as a "control-based" paradigm [33] distinguishing it from the more common "data-oriented" style usually found in real-time programs. In the data-oriented style time is represented using a variable that stores the value of a timer, whereas with the control-based approach the passage of time is built right into the structure of the program. The control-based approach simplifies timing analysis of the program because it can be done statically. Wi th the data-oriented approach, timing analysis is much more difficult because the passage of time can only be viewed by executing the program and tracing its behaviour. Synchronous reactive languages interact with the real world through the use of signals and drivers. Signals are abstract data entities which require drivers to be transformed into physical signals for outputs or to be generated from physical signals in the case of inputs. Signals are also used to communicate among concurrent processes during an execution cycle. 
When interfacing to the real world, it is important that any external action caused by a signal have a bounded response time in order to remain consistent with the synchrony hypothesis. In this framework clock ticks are just another signal and multiple clocks producing multiple signals are possible, therefore a multiform notion of time is supported. Tec shares an almost identical structure with these synchronous reactive languages. It too executes in alternating phases of activity and quiescence and supports the notion of logical concur-rency. The only modification required to be made to tec in order to allow it to be used for real-time control is the addition of signals and a driver. In order to add signals to tec, a new custom constraint system, clp(S), was created. Clp(S) works very much like clp(V), where constraints interact with the store through asks and tells, except that non-ground values are not allowed to be told. Also, clp(S) communicates with the outside world through the use of a customized driver. When clp(S) signals are told to the store, they are intercepted by the driver which performs the desired real-world action. The driver also translates external events into clp(S) signals that are automatically added to the store, possibly triggering other actions to occur. This behaviour is shown in figure 4.6. Even with the addition of clp(S), there is still one difference between tec and other synchronous reactive languages: external signals cannot cause tec computation cycles. However, this difference is due to CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 40 clp(S) driver tec control program Physical Output Signal Physical Input Signal Clp(S) tell • Clp(S) ask Clp(S) Constraint Clp(S) Constraint Constraint Store Figure 4.6: Interaction Between clp(S), tec and the Driver implementation decisions, not fundamental differences between the languages, and is dealt with by allowing the tec computations cycles to run freely and continuously. This creates a situation identi-cal to the cycles being triggered by an external clock with a frequency higher than but synchronized with all other external clocks. Clp(S) signals have the form clps(S:V) where S is the name of the signal and V is a legal value for that signal. The names and values for clp(S) signals are application dependent and relate to the system being controlled. For example, a system with a conveyor belt might have a signal name of conveyor and possible conveyor values of on, off, forward and reverse. Turning the conveyor on is then simply a matter of telling the constraint {clps(conveyor:on)}. As explained in chapter 2, tec combinators are constructs, written in tec itself, that extend the expressiveness of the language. The parallel nature of tec and the expressiveness of these combinators combine to create a powerful, declarative control language. Two combinators that are especially useful for real-time control are the delay and the watchdog combinators. The delay combinator next(Num, Signal) do A will execute agent A after there have been Num occurrences of the signal Signal. The watchdog combinator watchdog (Count, Trigger, WatchedFor, Recover) will abort the current execution path and replace it with the exception handler agent Recover if the constraint WatchedFor is not detected within Count occurrences of Trigger. Figure 2.4 showing CHAPTER 4. 
MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 41 the combinator definitions is reproduced in figure 4.11 with the additional definitions of these new combinators. As an example of using the delay combinator, figure 4.7 shows a motor-controlled pneumatic valve that is used to control the movements of an pneumatic piston. The motor moves the valve lever up or down to direct the airflow into outlets A or B . Figure 4.8 shows an agent that will cause the lever to move from position A to B , using a clp(S) constraint to turn the valve motor on. The delay combinator is then used to ensure the the motor is turned off again five clock ticks later. The watchdog agent is used for implementing timeouts. A timeout is the detection of an error due to an expected event not occurring. This detection of missing information is accomplished using tec's else operator, which implements default logic. For example, the conveyor belt shown in figure 4.9 moves the token to left, using the three sensors to detect it passing underneath. The conveyor is then supposed to turn itself off a short time after the token has passed underneath the last sensor. If however, due to some failure, the last sensor never detects the token, the conveyor will never shut off. This potential problem is handled with a watchdog which knows that the token should be detected within a set time of it being placed on the conveyor. If the token is not detected, the watchdog shuts down the conveyor and reports and error. Pneumatic Valve Figure 4.7: Motor-Controlled Pneumatic Valve v a l v e l _ d o w n { d p s ( v a l v e l : on) }, n e x t ( 5 , c l p s ( t i c k : o n ) ) do { c l p s ( v a l v e l : o f f ) } . Figure 4.8: Tec Agent For Switching Pneumatic Valve Position CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 42 Use of the watchdog combinator is shown in figure 4.10. The agent wa tched_conveyor simul-taneously launches the c o n v e y o r agent, which runs the conveyor belt, and the watchdog combinator which will invoke an exception handler if a timeout occurs. The c o n v e y o r agent turns on the con-veyor and waits until the token is detected under conveyor_sensor3. When the token moves under the sensor, the sensor will have the value on. The agent then waits for five more clock cycles before turning the conveyor off in order to make sure that the token has had enough time to travel past the end of the belt. If, however, there is a problem and the token is never sensed by conveyor_sensor3, then the conveyor belt will never shut off. This possibility is handled by the watchdog agent which will execute the exception handler agent h a l t if the token is not detected by conveyor_sensor3 within 35 clock ticks of the watchdog being launched. The exception handler is run by means of a call to a b o r t 3 which removes all agents currently being processed before launching the exception handler agent with an empty store. In this case, the agent h a l t simply turns the conveyor belt off. conveyor_sensor3 conveyor_sensor2 conveyor_sensorl D o n token Figure 4.9: Factory Conveyor Belt conveyor :: do { d p s (conveyor: on) } wa t c h i n g c l p s ( c o n v e y o r _ s e n s o r 3 : o n ) timeout ( do { c l p s ( c o n v e y o r : o n ) } w a t c h i n g c l p s ( c o n v e y o r _ s e n s o r 3 : o f f ) timeout ( n e x t ( 5 , c l p s ( t i c k : o n ) ) do { c l p s ( c o n v e y o r : o f f ) } ) ) . 
watched_conveyor :: conveyor, watchdog(35, c l p s ( t i c k : o n ) , c l p s ( c o n v e y o r _ s e n s o r 3 : o n ) , h a l t ) , h a l t :: { c l p s ( c o n v e y o r : o f f ) } . Figure 4.10: Tec Agent For Controlling the Conveyor Belt 3See chapter 2 for the syntax of abort. CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 43 always(A) :: A, next a l w a y s ( A ) . whenever C do A :: i f C then A e l s e next (whenever C do A ) . do P w a t c h i n g C :: i f C e l s e (P, next (do P w a t c h i n g C ) ) . do P w a t c h i n g C timeout B :: i f C then B e l s e (P, next (do P w a t c h i n g C timeout B ) ) . n e x t ( 0 , _ S i g n a l ) do A :: A. n e x t ( N , S i g n a l ) do A :: [ N l ] ~ ( i f ( S i g n a l ) then ( { c l p f d ( N l \#= N - l ) } , next n e x t ( N I , S i g n a l ) ) e l s e next ( n e x t ( N , S i g n a l ) do A ) • watchdog(Count, T r i g g e r , WatchedFor, Recover) :: [ V ] A ( {clpp(V:Count)}, do ( d e c _ n _ t e s t ( V , T r i g g e r , R e c o v e r ) ) w a t c h i n g WatchedFor ) . % t h i s agent used by watchdog d e c _ n _ t e s t ( V , T r i g g e r , R e c o v e r ) :: [CV,NV]$( i f T r i g g e r then ( i f ( c l p p ( V : C V ) , c l p f d ( C V - l #= NV)) then ( i f c l p f d ( N V #= 0) then a b o r t ( R e c o v e r ) e l s e next {clpp(V:NV)} ) ) e l s e ( i f clpp(V:CV) then next {clpp(V:CV)> ) ) . Figure 4.11: Definitions of tec combinators CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 11 4.2.2 The Art i fac t This thesis presents a new approach to diagnosing faults in dynamic systems and, in order to verify and evaluate these ideas, a physical model of a dynamic system was constructed as a testbed. The artifact in this case consists of a computer-controlled model of a manufacturing plant, built from L E G O . Constructing a physical model allowed tec to be tested as the control language for the system and a manufacturing plant was chosen for a testbed because this type of engineered system is an important potential application for model-based diagnosis. Manufacturing plants are dynamic systems usually with many concurrently operating but interdependent components. Furthermore, it was felt that by constructing a physical model, as opposed to writing a computer simulation, insight could be gained into the difficulties that would be encountered if these ideas are implemented in an industrial setting. This section gives a overview of the L E G O factory.4 Figure 4.12: The L E G O Factory The model that was constructed simulates an environment where objects are moved around a factory in order to be processed at different stations. In the L E G O factory these items are represented by tokens which circulate in a loop composed of a conveyor belt, a mechanical arm and a motorized 4 A note about the terminology used: the LEGO model of the manufacturing plant will be called the LEGO factory and the tec model of the plant which forms the SD will be called the tec factory model. CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 45 cart. The cart brings the tokens into position under the mechanical arm, which picks them out of the cart and places them at the start of the conveyor belt. The tokens then travel along the conveyor, passing under three different sensors along the way. These sensors are able to detect the presence of a token directly beneath them. When the token reaches the end of the conveyor, it slides back into the cart, which has travelled back down to receive it. 
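The conveyor stage of this loop is supervised by the watchdog of figure 4.10, whose timing behaviour comes from the combinator definitions of figure 4.11. The Python sketch below is only a rough analogue of that count-down-and-abort logic, assuming a list of (signal, value) events stands in for the sequence of stores; it is not the combinator's actual implementation.

def watchdog(events, count, trigger, watched_for, recover):
    # Count occurrences of `trigger`; if `watched_for` is not seen before
    # the count is exhausted, run the recovery action instead.
    remaining = count
    for event in events:
        if event == watched_for:
            return "ok"              # the expected event arrived in time
        if event == trigger:
            remaining -= 1
            if remaining == 0:
                recover()            # timeout: abort and run the handler
                return "timeout"
    return "incomplete"

# The conveyor example: the token should reach sensor 3 within 35 ticks.
events = [("tick", "on")] * 35       # no sensor reading ever arrives
print(watchdog(events, 35, ("tick", "on"),
               ("conveyor_sensor3", "on"),
               recover=lambda: print("halt: conveyor off")))   # "timeout"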
Figure 4.12 shows a photograph of the testbed that was built and the diagram in figure 4.13 gives its structure. Signal Allowed Values tick [on, off] con veyor_sensor 1 [on, off] conveyor_sensor2 [on, off] conveyor_sensor3 [on, off] arm_yaw .sensor [on, off] cart_top-sensor [on, off] cart_bot_sensor [on, off] conveyor [on, off, forward, reverse] cart [on, off, forward, reverse] valvel [on, off, forward, reverse] valve2 [on, off, forward, reverse] arm-yaw [on, off] electromagnet [on, off] Input Signals Output Signals Table 4.2: Valid Signals and Values for the L E G O Factory Movement in the L E G O factory is controlled by five D C motors and the current state of the system is detected with six sensors. One of the motors moves the conveyor belt, one moves the cart and three are used to operate the mechanical arm. The arm is the most complicated component in the system, consisting of two links each actuated by a pneumatic piston. Each piston is controlled by a pneumatic valve that is opened or closed with an electric motor; depending on the the valve position, the piston is either extended or retracted. The third motor connected to the arm is used to swivel it around its vertical axis. As mentioned previously, three of the six sensors are used to detect the token on the conveyor belt. The other three are used to detect motion limits. Two of these sensors are needed determine when the cart has reached its top or bottom limits of its travel and one is used to detect if the arm is positioned over the cart or the conveyor belt. Table 4.2.2 shows the clp(S) signals used to interface with the L E G O factory. The motors of the model are controlled by a Motorola 68HC11EVB microcontroller board running a small monitor program. This program implements several low level commands that the CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 46 Top View Sensor inputs Control Outputs! MC68HCI1EVB Controller Board clp(S) J tec Driver / ' Control \ Program Sensor Side View Figure 4.13: Diagram of the Model Factory driver, connected to the tec control program, issues to control the system. The controller board also samples all of the sensor inputs on command and returns the sensor readings to the driver. Communication between the controller board and the driver occurs over a serial interface. Table 4.3 shows the low level commands implemented by the controller board. As discussed in section 4.2.1, tec is used to control the operation of the factory. Clp(S) tells are converted by the driver software into commands that are sent to the controller board. Similarly, at the start of every iteration of the tec interpreter, the driver polls the controller board for the current state of the system and then adds this information to the store as a set of clp(S) constraints. For example, the tec control program fragment: { c l p s ( c a r t : o n ) } , whenever c l p s ( c a r t _ t o p _ s e n s o r : o n ) do { c l p s ( c a r t : o f f ) } . adds the clp(S) constraint c a r t : o n to the store and the driver code simultaneously sends the appro-CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 47 priate command ( M O N 2 1) to the controller board which then turns on the cart motor. Then, in ev-ery iteration, the driver will add the current state to the store. This will add clps (cart _top_sensor: of f) to the store, assuming the cart has not yet reached its upper l imit. 
When the cart does reach it, clps(cart_top_sensor:on) will be added, causing the {clps(cart:off)} command to be issued which stops the cart's motor. As time progresses, a sequence of stores is produced which traces the execution of the program and captures the evolving state of the L E G O factory. This execution trace is used to provide the observations. Commands Parameters Comments V E R Print version of the software M O N < m > < 11 0 > Turn motor m on or off M D I < m > < 11 0 > Set motor m to run forward or backward M D C <m> <HHHH> Set motor m's duty cycle (speed) R I N Return the 8 sensor inputs as a byte value ! Return the 8 sensor inputs as 2 ascii characters (nibbles) QUI Exit monitor where <m> is a motor number from 1-8 and <HHHH> is a 16 bit hexadecimal value. . Table 4.3: L E G O Controller Commands 4.3 tec Diagnosis This section discusses using tec to perform diagnosis from two standpoints. In the first subsection a theoretical definition of tec diagnosis is presented. These concepts are then applied to the practical example of the L E G O factory. This example requires a system description and a tec interpreter in combination with the diagnosis procedure in order to locate the faults. These aspects are discussed separately. 4.3.1 Diagnosis of Tec Programs In section 3.2 the logical foundations for the consistency-based diagnosis of cc programs were de-scribed. Essentially, a failure was detected when the observations conflicted with the behaviour predicted by the cc model. A diagnosis for the failure was then a labeling, ab(c) or -*ab(c) of all components in the cc model, which would restore consistency with the observations. It was then shown that labeling a component c as abnormal (i.e. ab(c)) was equivalent to removing the compo-nent's agent from the cc system model. The consistency of the remaining agents was then checked CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 48 through a process of theory completion. In this section these ideas are extended to deal with the more complex nature of tec. With tec, programs are executed as a series of cc iterations and this affects their diagnosis in two main ways. First, a series of observations are required, with a unique set of observations applied to each iteration. This does not mean that observations are required for every iteration, as some states in the state machine may not have any observable variables, but the more observations there are, the more specific the diagnoses will be, and if there are no observations then a conflict can not be created. However, only one observation in one iteration is required to create an inconsistency. This means that in order for there to be consistency between the model and the observed behaviour all observations in all iterations must be consistent with the expected behaviour. The second difference between cc and tec diagnosis is that tec programs can schedule agents for evaluation in future iterations. The act of deferring an agent's execution will never raise an inconsistency in itself, so if this delayed agent does create a conflict, it wil l only happen in the iteration in which the agent is evaluated. This makes diagnosis more computationally expensive because the event of scheduling the agent is separated from the time in which the agent is evaluated. In this case, the diagnosis procedure may need to backtrack across iterations in order to resolve the conflict. 
Therefore, the fact that agents can be delayed with tec means that for diagnosis there are two classes of assertions: external observations and internally generated agents that have been carried forward. An error is detected if either of these creates a conflict.

As with cc, a model written in tec consists of a set of agents, each representing the constituent components of the artifact being modeled. This program is labeled P and the components it models are COMPS(P), which are defined over an underlying constraint system CS. Each agent in P has the form p(X) → B_c where p(X) is the agent's name and parameter list and B_c is its normal behaviour. In defining the system description, the behaviour is augmented with the distinguished predicate ab(·) which removes the normal behavioural constraints, B_c, if a component is labeled abnormal. Therefore, the system description is defined as before as

SD(P) = {(p(X) → (B_c ∨ ab(p))) | (p(X) → B_c) ∈ P} ∪ CS.

The difference from section 3.2 is that this same SD(P) is used repeatedly over several iterations. The initial iteration is labeled α and the final iteration Ω (this notation is due to [12]). In each iteration i, a set of assertions is presented. These assertions are made up of the external observations OBS_i as well as STATE_i, the agents carried forward from previous iterations (either of which may happen to be null for some or all iterations). This leads to the definition of a tec diagnosis, which will restore consistency over all iterations i ∈ [α, Ω].

Definition 4.3.1 (Diagnosis of a Tec Program) Given a tec program P composed of a set of components COMPS(P), a system description then says that each component behaves either normally or abnormally. Given a set of components Δ ⊆ COMPS(P) containing only components assumed abnormal, then D(Δ, COMPS(P) − Δ) is a consistency-based diagnosis for SD(P), COMPS(P) and a set of assertions OBS iff

SD(P) ∪ D(Δ, COMPS(P) − Δ) ∪ ⋀_{i=α}^{Ω} (OBS_i ∪ STATE_i)

is consistent, where the STATE_i assertions are a function, T, of the previous iterations, defined as:

STATE_α = ∅
STATE_i = T( SD(P) ∪ D(Δ, COMPS(P) − Δ) ∪ ⋀_{j=α}^{i−1} (OBS_j ∪ STATE_j) ).

The function realized by T is determined by the operational semantics of the interpreter.

This diagnosis D(Δ, COMPS(P) − Δ) is a partitioning of COMPS(P) into two sets: Δ, which consists of the agents representing those components assumed abnormal, and COMPS(P) − Δ, the remaining agents representing the normally behaving components. Given the definition of SD(P), members of Δ are of the form ((p(X) → (B ∨ ab(p))) ∧ ab(p)), or more simply ab(p), and members of COMPS(P) − Δ are of the form ((p(X) → B) ∧ ¬ab(p)). Since the predicate ab(·) does not appear in a tec program, checking a hypothesis Δ for consistency reduces to removing the agents in Δ from P. This is done for all iterations i ∈ [α, Ω]. The remaining agents in PΔ are checked for consistency using the same theory completion process as with cc.

Recalling from chapter 3, the removal of the agents in Δ from P has the potential to cause some or all of the remaining agents to block indefinitely. This is because the asks in the remaining agents' antecedents can only be satisfied by the agents that have been removed from the program. With theory completion, constraints that these asks are blocked on are simply told to the store, allowing the blocked agents to continue.
This is acceptable because the agents in A can behave in any manner, including telling the constraints needed to allow the remaining agents to run to completion. CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 50 Wi th tec, the removal of agents from P in every iteration creates a large hole in the system description. This means that observations become very important in supplying missing information about behaviour. Also, just as with cc, theory completion creates explanations by supplying input and output values for the components that form the diagnoses. However, with tec a series of explanations are generated which can be used to characterize a failure more completely than with cc. 4.3.2 The System Descr ip t ion arm) Token_pos ) :: { clpp(Token_pos:0) }. sensor(Pos, Token_pos, Sensor) :: i f clpp(Token_pos:Pos ) then {• clps(Sensor:on) } else { clps(Sensor:off) }. belt( Token_pos ) :: [CP, NP]$( i f (clpp(Token_pos:CP), clpfd(CP+l #= NP)) then next { clpp(Token_pos:NP) } ) • sensorl(Tp,S) sensor2(Tp,S) sensor3(Tp,S) sensor(l ,Tp,S). sensor(3,Tp,S). sensor(5,Tp,S). factory(Tp) :: ( arm(Tp), for(6) do ( belt(Tp), sensorl(Tp,conveyor_sensorl) sensor2(Tp,conveyor_sensor2) sensor3(Tp,conveyor_sensor3) ) ) . Figure 4.14: Lego Factory System Description in tec Figure 4.14 shows a simple tec model of the factory used as the system description for diagnosing the L E G O factory. Of all the components of the factory, only the actions of the arm and the conveyor belt (with its sensors) are modeled. The model, therefore, has 5 agents representing the arm, the conveyor and the three sensors. A sixth agent named 'factory' models the whole system by grouping CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 51 the components together and connecting them by way of the variable Tp, which holds the current token position. This tec program models the system as a state machine with six states, as is shown in figure 4.15. When the factory agent is first run, the arm agent sets Tp to 0, modelling how the token is placed at the start of the conveyor; this initializes the state machine. The three sensor agents and the belt agent are then run for six iterations. In each iteration the sensors return (by adding constraints to the store) whether the token is underneath them. The belt agent moves the token forward one position at a time by scheduling the next token position to be one greater than the current position. Sensor 3 0 Sensor 2 Sensor 1 a token X) r 3 'h x \ to Figure 4.15: The Six States of the Lego Factory 4.3.3 The tec Interpreter The tec interpreter is a Prolog program that executes the tec models. The interpreter is invoked in the Prolog environment with the command tcc(+Initial.agent, -BL, +Delta, -Expl) where the inputs are InitiaLagent (which is not only the initial agent but also the observations) and Delta, a list of agents whose constraints are to be suspended. Delta is empty when performing consistency checking or when simply using the model to simulate the system. The interpreter returns two lists: the blocked list, B L and the list of explanations. The blocked list contains the agent(s) whose antecedents could not be satisfied when the program reached its final resting point. The explanations are the agents in Delta whose unrestrained port variables have been instantiated with those values or constraints that allowed the models to continue. The interpreter maintains three queues of agents as it runs. 
These are the 'ready-to-run' queue (RTR), the 'blocked' queue (BL) and the the 'nexts' queue (Next). The R T R contains agents in the current iteration that are waiting to be evaluated. The B L contains agents which have been evaluated, but whose antecedents are not satisfied (and have not been proven false) by the store; CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 5 2 whenever new information is added to the store, the agents on B L are reevaluated to see if they can proceed. The Next list contains agents whose execution has been deferred until the subsequent iteration through the use of the next operator. The interpreter also maintains the store for each iteration. When quiescence is reached, the store is cleared in preparation for the next iteration but it is also saved in a history list. This history of past stores allows the evolving state of the system to be examined. The interpreter continues to execute new iterations as long as there are agents in the Next list. When this list is empty, the final iteration will be executed and the interpreter will return 'yes' if it reaches its final resting point. If, however, the interpreter encounters an inconsistency, it will then backtrack, finally aborting execution and returning 'no' if it is unable to find a consistent store. This is how consistency between the system description and the observations is checked. For example, to simply simulate the system, the interpreter is invoked with the system model, but without any observations. Delta is set to null because all components from the model should be used in simulation. In the case of the tec model from figure 4.14 the interpreter would be invoked with t c c ( [Tp] A f actory(Tp) , BL, [ ] , Expl ) . Since the variable T p is not formally declared in the program, it is declared here as an existentially quantified variable since it names a port. Figure 4.16 shows the results of executing the above command. The model runs for six iterations before terminating successfully. The store is then printed out allowing the execution trace to be viewed. Although there is only one store, the clp(S) and clp(V) constraints are shown separately to improve readability. The sequence of clp(V) stores trace the path of the token through the system. In figure 4.16, s i is the Skolem constant representing the existential variable Tp and the current value of this port variable is given on the left side of the '—' in every iteration. On the right side of the '—' are the negated values. For instance, at t(0) - ( s l : l ) means that the value of Tp (represented by si) was not 1 in iteration 0. This constraint is due to sensor 1 which did not sense a token beneath it in the initial iteration. Similarly, the sequence of clp(S) constraints trace the expected sensor readings produced by the simulation. 4.3.4 The Diagnosis Procedure Figure 4.17 gives an overview of the procedure used to diagnose faults in the the L E G O factory. The first step in the process is to actually run the system in order to collect the observations. This CHAPTER 4. MODELING, FAULT DETECTION AND DIAGNOSIS WITH T C C 53 ?- tec([Tp]"factory(Tp),B1,[],Abd). * * Begin Iteration: 0. * * Begin Iteration: 1. * * Begin Iteration: 2. * * Begin Iteration: 3. * * Begin Iteration: 4. * * Begin Iteration: 5. * * Begin Iteration: 6. 
4.3.4 The Diagnosis Procedure

Figure 4.17 gives an overview of the procedure used to diagnose faults in the LEGO factory.

Figure 4.17: Overview of the Diagnostic Process (the tec control program drives the Lego factory model, the artifact, and records a sequence of stores, the unfiltered OBS; a filter reduces this to the filtered store, the OBS, which the hypotheses generator and the tec interpreter use to produce the results)

The first step in the process is to actually run the system in order to collect the observations. This is done by using the tec control program to run the model and then saving the history of stores produced. However, at this point there is a conceptual mismatch between the actual system and the predicted behaviour: the tec factory model represents the system as having only six states, while the control program produces a history of many more than six states. This is because the tec model uses the token position to trigger state changes, whereas with the tec control program state changes are driven by the tick of a system clock. For example, as the tec control program runs freely, it may collect ten samples while the token moves through position τ0 and then, when the token triggers sensor 1, ten more samples of clps(conveyor_sensor1:on) as the token moves under the sensor. To overcome this problem, the store is filtered by extracting just those six observations which coincide with the token positions shown in figure 4.15.

Once the appropriate observations have been extracted, they are combined with the tec factory model and checked for consistency using the tec interpreter. If the model does not predict any behaviour that conflicts with the observations, then correct behaviour of the system is verified. If, on the other hand, the interpreter returns "no", then there is an inconsistency. In this case, the hypothesis generator re-runs the interpreter with different combinations of agents in Delta in order to find which combinations of agents remove the conflict. Although the tec factory model uses just six states to represent the physical system, it is still possible to find meaningful diagnoses with this simplification.

When performing diagnosis, the observations are expressed as part of the initial agent; thus the initial agent is literally SD ∪ OBS. For example, if at time T3 the observation for sensor 2 was conveyor_sensor2:off, then this would be added to the invocation with the call tcc([Tp]^(factory(Tp), next next next {clps(conveyor_sensor2:off)}), BL, [], Expl). The three nexts delay the observation from being evaluated until time T3, at which point it conflicts with the predicted behaviour of conveyor_sensor2:on. This inconsistency causes the interpreter to eventually return "no". Having detected a fault, the hypothesis generator then re-runs the interpreter with different combinations of components in Delta in order to diagnose the problem.
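The following Prolog sketch illustrates this kind of candidate generation; it is not the thesis implementation. The component list and the consistent/1 stub are assumptions made for the example: in the real system, consistent/1 would invoke the tcc/4 interpreter with the candidate Delta and succeed only if it answers 'yes'.

    % Components of the factory model that may be suspended (assumed list).
    components([arm, belt, sensor1, sensor2, sensor3]).

    % diagnosis(-Delta): Delta is a smallest set of components whose
    % suspension restores consistency between SD and OBS.  Candidates are
    % generated in order of increasing cardinality, so single faults are
    % found before double faults.
    diagnosis(Delta) :-
        components(Cs),
        length(Cs, N),
        between(0, N, K),
        length(Delta, K),
        subset_of(Cs, Delta),
        consistent(Delta).

    % subset_of(+Set, ?Sub): Sub is a sublist of Set (order preserved).
    subset_of([], []).
    subset_of([C|Cs], [C|Ds]) :- subset_of(Cs, Ds).
    subset_of([_|Cs], Ds)     :- subset_of(Cs, Ds).

    % Stub for illustration only.  In the real system this would be
    %     consistent(Delta) :- tcc(SDandOBS, _BL, Delta, _Expl).
    % Here it simply accepts any candidate containing one of the three
    % single faults discussed in the text.
    consistent(Delta) :-
        member(C, Delta),
        memberchk(C, [arm, belt, sensor2]).

    % ?- diagnosis(D).
    % D = [arm] ;  D = [belt] ;  D = [sensor2] ;  ...

On backtracking the sketch also enumerates larger candidates, so a real generator would additionally discard any candidate that is a superset of a diagnosis already found.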
The sensor 2 example above also shows that it is not necessary for there to be an observation for every state; in fact, just one observation can cause a conflict. However, more observations will usually refine the diagnoses. If the only observation available is that at time T3 sensor 2 is off, then there are three possible single faults that would explain it: either the arm is broken and never placed the token on the conveyor, or the conveyor is broken and never moved the token into position under sensor 2, or sensor 2 itself has failed and does not detect a token that is actually there. If we then add the observation that a token was also detected by sensor 3 at time T5, this rules out the arm and the belt as diagnoses, because the token must have been placed on the conveyor and moved to position 5 in order to have been detected by sensor 3.

4.4 Active and Reactive Models

In the previous example of diagnosing the LEGO factory, the model used (figure 4.14) represented the behaviour of the entire system, both the artifact and the controller. This type of model is termed an 'active' model because it models the system and how it operates together. It is also possible to write 'reactive' models. Reactive models are meant to represent only the behaviour of the physical system, including how it responds to commands from the controller. Besides responding to the same clp(S) control signals that the physical system responds to, a reactive model also emulates the sensor signals produced by the artifact. These outputs will likely trigger new commands from the control program. Thus, in the case of reactive models, the system description is the conjunction of the model and the controller (SD = Reactive Model ∪ Controller).

For example, the agent in figure 4.18(a) is an active model of the arm component of the factory: the agent embodies the controller as well as the arm, and it sets the token position to 0 as soon as it is run. Figure 4.18(b), on the other hand, is a reactive model. It 'reacts' to the controller signal clps(electromagnet:off) by setting the token position to 0, and it keeps it at 0 until the controller turns on the conveyor with the signal clps(conveyor:on).

    active_arm(Token_pos) ::
        { clpp(Token_pos:0) }.

    (a) Active

    passive_arm(Token_pos) ::
        whenever clps(electromagnet:off) do (
            do (next { clpp(Token_pos:0) })
            watching clps(conveyor:on)
        ).

    (b) Reactive

Figure 4.18: Active and Reactive Models of the Arm Component

This approach has several benefits. First, the models are more declarative because they represent just what the artifact does given different control signals. More importantly, the controller program is now part of the system description, and so faults in the controller can also be diagnosed.

4.5 Diagnosis of Timeouts

Section 4.3.4 presented a state-based approach to diagnosing dynamic systems, in which the dynamic system is modeled as a deterministic state machine and the observations are assumed to coincide with the observable states. Although this technique works, it has several limitations that reduce its effectiveness.
One of the biggest limitations is that the history of stores must be reduced so that only the observations that coincide with the model's states are retained. This becomes difficult when the system is not working properly. For instance, if sensor_3 in the LEGO factory fails, there is no way to tell when the token has moved from τ4 to τ5 based on the sensor readings (see figure 4.15). One solution could be to know how quickly the conveyor moves and how often the sensors are sampled in order to determine which sensor readings correspond to which position. However, this approach is inexact because there is no guarantee that the actual conveyor belt speed is constant. A better approach would be to avoid filtering altogether and to use all sensor readings in the diagnosis.

If sensor_3 does fail to detect a token, then a second problem occurs, besides the difficulty of detecting the transition from τ4 to τ5: sensor_3 is used as the trigger to turn off the conveyor belt and to terminate the control program. If sensor_3 fails, these events never happen. To deal with this possibility, the tec control program includes a watchdog timer which assumes a failure and shuts the system down if a token is not detected under sensor_3 within a set number of clock ticks. In modeling the system, the behaviour of this watchdog timer should also be introduced into the tec model. However, this creates a problem because the watchdog requires clock ticks as its observations, which means that it is no longer possible to abstract time out of the system description. Adding the watchdog timer model to the SD greatly expands the number of states in the system description, but, more importantly, it also introduces nondeterminacy into the diagnosis problem. The reason for this is that the speed of the physical model is not always constant, and this introduces some variability into the observations even when the system is operating correctly.

In the untimed model of figure 4.15, the token path was divided into six regions. In effect, the model created a six-state state machine with one state per region, and the observations were chosen so that there was one observation per state. The addition of time changes the model so that state transitions are taken on clock ticks. This adds many more states to the model because many clock ticks occur while the token travels between the sensors. Also, because the belt speed is variable and not synchronized with the clock, there is some variation in the observations. This is shown in table 4.4, which gives a series of sensor readings for the token travelling from τ1 to τ2. The unique set of sensor readings for the state-based model of figure 4.15 can turn into different, yet still correct, series of readings when clock ticks are introduced, as with the example sensor readings Timed 1 and Timed 2.

                Observation #   Observations
    Untimed     1               sensor1:on
                2               sensor1:off
    Timed 1     1               tick:on, sensor1:on
                2               tick:on, sensor1:on
                3               tick:on, sensor1:off
    Timed 2     1               tick:on, sensor1:on
                2               tick:on, sensor1:off
                3               tick:on, sensor1:off

    Table 4.4: Observations of Correct Behaviour for Untimed and Timed Models

With a correctly working system, SD ∪ OBS is consistent; but if there is some variation in OBS even when the system is working correctly, then the SD must have a degree of flexibility in order to maintain consistency. The solution to this need for flexibility is to do the modeling with constraints.
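As a concrete illustration of why inequality constraints provide this flexibility, the following Prolog sketch (using library(clpfd)) encodes a belt whose position advances by one or two units per tick and a token of width four passing a sensor. The sensor location, the token width and the predicate names are choices made for this sketch, not values taken from the thesis model. Both correct observation sequences from table 4.4 are consistent with it, while a sequence that starts with the sensor off is not.

    :- use_module(library(clpfd)).

    % Assumed parameters for this sketch only: a sensor at position 3 and a
    % token four units wide whose trailing edge starts at position 0.
    sensor_pos(3).
    token_width(4).

    % The trailing edge advances by one or two positions per clock tick.
    positions([], _).
    positions([P|Ps], Prev) :-
        P #>= Prev + 1,
        P #=< Prev + 2,
        positions(Ps, P).

    % The sensor reads 'on' exactly when it lies between the trailing and
    % leading edges of the token.
    reading(P, on)  :- sensor_pos(S), token_width(W), P #=< S, S #=< P + W.
    reading(P, off) :- sensor_pos(S), token_width(W), (S #< P #\/ P + W #< S).

    % A sequence of per-tick readings is consistent if some belt trajectory
    % satisfies all of them.
    consistent_with(Readings) :-
        length(Readings, N),
        length(Ps, N),
        positions(Ps, 0),
        maplist(reading, Ps, Readings),
        label(Ps).

    % ?- consistent_with([on, on, off]).    % Timed 1: succeeds
    % ?- consistent_with([on, off, off]).   % Timed 2: succeeds
    % ?- consistent_with([off, on, on]).    % faulty sequence: fails

The point is only that a single inequality model covers every timing the physical belt can exhibit; this is exactly the flexibility that the tec belt agent of figure 4.19 builds into the system description.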
Although tec supports modeling with constraints, all of the examples presented so far have used basic equality constraints. The need for more flexibility, however, demands the use of less restrictive inequalities. To model the variability in the speed of the conveyor, the conveyor is divided into 30 regions. This number was chosen to allow a range of behaviours to be modeled while still keeping the total number of states manageable. The variable speed of the conveyor belt is then modeled by assuming that it moves forward between one and two regions every clock tick. Figure 4.19 shows this flexible model for the belt, which extracts the current position (CP) from the store and then tells the next position (NP) to be greater by one or two conveyor steps.

    belt(Tp) ::
        [CP]$(
            if clpp(Tp:CP) then (
                { clpfd(CP+2 #>= NP) },
                { clpfd(CP+1 #=< NP) },
                next { clpp(Tp:NP) }
            )
        ).

Figure 4.19: Flexible Conveyor Belt Agent

As time progresses, the difference between the minimum and maximum distances that the belt might have travelled becomes increasingly large. At the same time, an increasingly large number of different possible observations will still produce a consistent store. With these qualitative models, the observations are more important than ever because they have the side effect of instantiating the constraints with ground values. This approach of using constraints to define the behaviour of the artifact produces much more flexible, qualitative models that are able to accept deviations in normal behaviour.

However, the use of inequalities in the model has implications for the simulation process, because it is no longer clear when antecedents are entailed by the constraints. For example, if sensor_1 is located at position 10 and a one-unit-wide token has a range of possible positions between 8 and 10, then the token may or may not be under the sensor. When simulating the model, the interpreter deals with this by assuming that the token is under the sensor at the first instant it could be under the sensor. Therefore, in the absence of conflicting information, the interpreter will assume that Tp=10. If, however, the contradictory observation that sensor_1=off is added, this assumption must be retracted. Note that this does not cause an inconsistency, because pos(Token) >= 8 ∧ pos(Token) <= 10 ∧ off(Sensor_1) is not inconsistent. Instead, this information is used to refine the constraints placed on the token position, which becomes pos(Token) >= 8 ∧ pos(Token) <= 9. This underlines the importance of sensor agents in using observations to refine the constraints. This behaviour of simulating with constraints can be accomplished simply by making the variables that have inequality constraints on them unrestrained variables. This causes theory completion to be used to give the constraint variables specific values, as long as this does not conflict with any other facts. If it does conflict, the assertion is retracted and the range on the variables is refined.
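The refinement just described can be pictured with a few lines of Prolog clpfd; the predicate name and the encoding of the 'sensor off' observation as the exclusion of position 10 are assumptions made for this sketch, not the thesis mechanism itself.

    :- use_module(library(clpfd)).

    % The inequality constraints only pin the token position to 8..10; a
    % sensor at position 10 reading 'off' excludes 10 and leaves 8..9.
    refined_position(Pos, Domain) :-
        Pos in 8..10,        % what the flexible belt model allows so far
        Pos #\= 10,          % observation: the sensor at position 10 is off
        fd_dom(Pos, Domain).

    % ?- refined_position(Pos, D).
    % D = 8..9.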
Figure 4.20 gives the qualitative reactive model for the LEGO factory; note that this model includes only the third sensor, at the end of the conveyor. The belt agent models the conveyor by assuming that it moves forward between one and two positions every clock tick. It does this by extracting the current position from the store and then using the clp(FD) constraint system to constrain the range of possible positions for the belt in the next iteration. If the current set of observations does not include a clock tick, then the next position is the same as the current one. The blank agent models the token on the conveyor belt. The token is modeled as having a width of four units, with the leading edge four units ahead of the trailing edge. The sensor, located at position 35, extracts the token's leading and trailing edges from the store in order to determine whether the token lies underneath it. The abvars agent is a dummy agent that is placed in Delta for the sole purpose of making Pa and Pb (the variables that hold the token's trailing and leading edge positions, respectively) unrestrained variables. If Pa and Pb are not ground, then the sensor agent will use theory completion to tell Pa =< 35 and Pb >= 35, unless these constraints are inconsistent with any constraints already placed on Pa and Pb. When the theory completion is done, the sensor will be assumed to be on. If, however, an observation that the sensor is off conflicts with this, then the negation of the constraint will be told and the constraints defining the current position of the token will be refined.

The factory agent is written in a reactive style. In the LEGO factory, the arm picks the token out of the cart and then holds it over the conveyor belt with its electromagnet. When the magnet is turned off, the token drops down to its initial position on the conveyor, and the token remains there until the conveyor belt is turned on. This is modeled in the factory agent by setting the trailing edge of the token to position 0 and keeping it at 0 until the signal that turns on the conveyor is sent. When the controller turns on the conveyor belt by telling clps(conveyor:on), the factory agent starts the conveyor_step agent, which runs all the component agents in the factory. This conveyor_step agent is run in every iteration until a conveyor:off signal is received and the simulation ends.

If the sensor is faulty, then it will not detect the token and the watchdog agent will be used to halt the system. This is a case of a timeout, and the error will be detected because the model predicts that the token is detected by the sensor while no such observation occurs. This timeout is diagnosed when the sensor agent is put in Delta. This removes the behaviour of the sensor from the model, so it will not detect the modeled token underneath it. Again the watchdog agent ends the simulation, and in this case the predicted behaviour matches the observations and consistency between them is restored, thus confirming the sensor as a diagnosis.

    belt(Tp) ::
        [CP]$(
            if clpp(Tp:CP) then (
                if clps(tick:on) then (
                    { clpfd(CP+2 #>= NP) },
                    { clpfd(CP+1 #=< NP) },
                    next { clpp(Tp:NP) }
                ) else (
                    { clpfd(CP #= NP) },
                    next { clpp(Tp:NP) }
                )
            )
        ).

    blank(Ta,Tb) ::
        [CP,NP]$(
            if clpp(Ta:CP) then
                if clpfd(CP+4 #= NP) then { clpp(Tb:NP) }
        ).

    sensor(Ta,Tb,Pa,Pb) ::
        if clpp(Ta:Pa) then (
            if clpp(Tb:Pb) then (
                if clpfd(Pa #=< 35 #/\ 35 #=< Pb)
                    then { clps(conveyor_sensor3:on) }
                    else { clps(conveyor_sensor3:off) }
            )
        ).

    abvars(_Ta,_Tb).

    conveyor_step(Ta,Tb) ::
        [Pa,Pb]^(
            abvars(Pa,Pb),
            belt(Ta),
            blank(Ta,Tb),
            sensor(Ta,Tb,Pa,Pb)
        ).

    factory(Ta,Tb) :: (
        whenever clps(electromagnet:off) do (
            do (next { clpp(Ta:0) })
            watching clps(conveyor:on)
        ),
        whenever clps(conveyor:on) do (
            do conveyor_step(Ta,Tb)
            watching clps(conveyor:off)
        )
    ).

Figure 4.20: Reactive, Qualitative Factory System Description in tec
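The watchdog behaviour referred to above belongs to the control program rather than to figure 4.20, and it runs online in tec. As a rough, post-hoc rendering of the same check, the following Prolog sketch examines a recorded history of stores; the store representation as a list of clps/1 terms, the predicate names and the shutdown action are all assumptions made for illustration, not the thesis code.

    % watchdog(+History, +MaxTicks, -Action): if sensor 3 has not fired
    % within the first MaxTicks stores of the history, assume a failure and
    % shut the conveyor down; otherwise keep running.
    watchdog(History, MaxTicks, Action) :-
        length(History, L),
        Window is min(L, MaxTicks),
        length(Prefix, Window),
        append(Prefix, _, History),
        (   member(Store, Prefix),
            member(clps(conveyor_sensor3:on), Store)
        ->  Action = keep_running
        ;   L >= MaxTicks
        ->  Action = shutdown(clps(conveyor:off))
        ;   Action = keep_running          % too early to decide
        ).

    % ?- watchdog([[clps(tick:on)], [clps(tick:on)], [clps(tick:on)]], 3, A).
    % A = shutdown(clps(conveyor:off)).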
4.6 Summary

The LEGO factory that was constructed turned out to be very useful for several reasons. First, it demonstrated conclusively tec's ability to control real-time systems; tec's declarativeness and parallelism made the task of writing the control program very straightforward. Second, this physical artifact demonstrated the ability of tec to model a dynamic system as a state machine, but it also uncovered a weakness of this approach: the difficulty of choosing which particular observations, out of a large set, correspond to the distinct states of the state machine model. This difficulty led to the creation of a more flexible model of the artifact using inequality constraints. This model was able to accurately represent the slight variations in the behaviour of the artifact and removed the need to filter the observations. Also, by creating a 'reactive' model, it was possible to incorporate the controller into the system description. This allowed the model to behave in response to controller signals just as the actual artifact would. Since the controller contained a watchdog timer, the effects of timeouts could then be simulated and their causes diagnosed. This proved very effective, although the very large number of possible states in the flexible, reactive model greatly increased the time required to prove an inconsistency.

Chapter 5

Conclusions and Future Work

5.1 Conclusions

This thesis has presented a novel approach to model-based diagnosis by applying the techniques of consistency-based diagnosis to the concurrent constraint languages cc and tec. This is an important result because it directly addresses one of the biggest drawbacks of model-based diagnosis: the difficulty of creating useful models. Cc and tec allow models to be written that are expressive, declarative, composable, flexible and able to model temporal behaviour. In conducting this research, interpreters for cc and tec were written and then modified to incorporate the consistency-based diagnosis procedure, creating a generic framework for diagnosing cc or tec programs, or models written in these languages.

Consistency-based diagnosis was first applied to the simpler cc language. Cc offers many advantages as a modeling language: its models are declarative, allow concurrent processes to be easily represented and incorporate the expressiveness of predicate logic, and these beneficial properties extend to diagnosis. Most of the standard examples commonly cited in the diagnosis literature have been reproduced with models that are short, clear and declarative. In applying consistency-based diagnosis to cc, a process of theory completion was identified, along with its ability to provide sample input and output values for failed components, which aid in 'explaining' the diagnoses.
Furthermore, it was shown that fault models could be incorporated and that this framework could be used to perform another form of model-based diagnosis: abductive diagnosis.

This diagnostic approach was then extended to tec, allowing temporal models of artifacts exhibiting much more complex behaviour to be written for both simulation and diagnosis. This is important because this diagnostic approach with tec is, as far as we are aware, the first able to use a first-order temporal modeling language, while existing comparable approaches use propositional temporal modeling languages. Additionally, tec's usefulness as a real-time control language was demonstrated by its ability to control a LEGO factory model constructed as a testbench. This too is a useful result: not only does it demonstrate tec's applicability in filling very different roles, it also means that faults in both the control program and in the model can be diagnosed with the same procedure. With this testbench, tec's ability to model and diagnose systems that can be represented as a series of discrete states was demonstrated. Finally, it was shown that the inequality constraints of tec allow flexible, qualitative models to be constructed that are able to reflect a range of correct behaviours while still conflicting with observations of faulty behaviour. Furthermore, the temporal and default logic properties of tec together allow timeouts to be detected and their causes diagnosed. Timeouts are very commonly used to detect errors in industrial systems and, to the best of our knowledge, this is the first approach able to diagnose possible causes for these failures. However, there are still limitations to this approach; these are addressed in the next section.

5.2 Future Work

Although this research has shown that cc and tec can be adapted for use with consistency-based diagnosis, this work is really only a preliminary investigation. In the course of conducting this research, many questions have arisen that demand further investigation before industrial application of these ideas will be feasible. In other cases, new opportunities have appeared that offer great potential to extend these techniques even further.

• Scalability
In almost all model-based diagnosis literature, including this thesis, only small, illustrative examples are presented. If these techniques are to achieve widespread use, they must be able to scale up to the much greater complexity of real-world systems. However, many model-based techniques do not scale well because the search space for determining consistency grows exponentially with the number of components in the model. Tec, however, offers some hope in this area, as it supports the creation of hierarchical models. The creation of hierarchical models and their use in diagnosis deserve more research.

• Mode Identification and Planning
In the work of Williams and Nayak for Deep Space One [32], it was shown that the same models used for diagnosis could also be used for mode identification and for planning. Similar to diagnosis, mode identification is the use of a system model and the available observations to determine the current state of the artifact. Planning uses the same model to determine what inputs are required to take the artifact from its current state to a new, goal state. Both of these activities are possible with the modified tec interpreter used for this thesis, but doing so has not been explored.
• Probability and Cost Functions
With basic consistency-based diagnosis, there are often multiple diagnoses produced for an observed failure, but there is no way to discriminate among them other than to say that single faults are more likely than double faults, and so on. With the addition of failure probabilities, it becomes possible to rank diagnoses from most to least likely. If tec models are being used for planning, then it is also possible to associate a cost function with the state transitions, allowing the lowest-cost route to a goal to be planned.

• Sharing Constraints
Although cc and tec are built on top of one or more underlying constraint systems, the implementation does not allow different constraints to be shared among variables. This is a problem with the implementation of the constraint system.

• Improving Search Efficiency
One problem that has become apparent with using tec for consistency-based diagnosis is that models constructed from constraints are consistent with a large range of behaviours. This means that when observations of faulty behaviour are introduced, a very large search space must be examined to determine that the faulty behaviour is indeed inconsistent with the model. Solving this problem needs more research and may require the use of heuristics.

• Extending tec Diagnosis to hcc
Hybrid cc [26], or hcc, is yet another language in the family of cc languages. Hcc models systems that vary both continuously and discretely with time (hence "hybrid"). As such, it presents a whole new range of possibilities in diagnosis. Hcc may allow even more complex systems to be diagnosed, especially those already characterized by differential equations.

References

[1] G. Berry and G. Gonthier. The ESTEREL synchronous programming language: design, semantics, implementation. Science of Computer Programming, 19:85-98, 1992.

[2] Gregory W. Bond, Andrew Watkins, and Peter Lawrence. Consistency-based diagnosis of concurrent constraint programs. Available at http://www.ece.ubc.ca/~bond/papers/cc.pdf, 1998.

[3] K. L. Clark and S. Gregory. A relational language for parallel programming. Technical Report DOC 81/16, Imperial College, July 1981.

[4] Phillippe Codognet and Vijay A. Saraswat. Abduction in concurrent constraint programming. Available at ftp://ftp.parc.xerox.com/pub/ccp/cc-abduction/cc-abduction.4050ps.

[5] Adnan Darwiche. Model-based diagnosis using structured system descriptions. Journal of Artificial Intelligence Research, 8:165-222, 1998.

[6] Adnan Darwiche and Gregory Provan. Exploiting system structure in model-based diagnosis of discrete-event systems. In Working Papers of the 7th International Workshop on Principles of Diagnosis (DX-96), pages 95-105, 1996.

[7] Randall Davis. Diagnostic reasoning based on structure and behavior. Artificial Intelligence, 24:347-410, 1984.

[8] Randall Davis and Walter Hamscher. Model-based reasoning: Troubleshooting. In H. E. Shrobe, editor, Exploring Artificial Intelligence: Survey Talks from the National Conferences on Artificial Intelligence, pages 297-346. Morgan-Kaufmann, 1988.

[9] J. de Kleer and J. S. Brown. Model-based diagnosis in SOPHIE III. In L. Console, W. Hamscher, and J. de Kleer, editors, Readings in Model-Based Diagnosis, pages 179-205. Morgan-Kaufmann, 1992.

[10] Johan de Kleer and Brian C. Williams. Diagnosis with behavioral modes.
In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), volume 2, pages 1324-1330, 1989.

[11] John Durkin. Expert systems: A view of the field. IEEE Expert, pages 56-63, April 1996.

[12] Gerhard Friedrich and Franz Lackinger. Diagnosing temporal misbehaviour. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91), volume 2, pages 1116-1122, 1991.

[13] Michael R. Genesereth. The use of design descriptions in automated diagnosis. Artificial Intelligence, 24:411-436, 1984.

[14] Paul Le Guernic, Thierry Gautier, Michel Le Borgne, and Claude Le Maire. Programming real-time applications with SIGNAL. Proceedings of the IEEE, 79(9):1321-1336, 1991.

[15] Vineet Gupta, Radha Jagadeesan, Vijay Saraswat, and Danny Bobrow. Programming in hybrid constraint languages. In Antsaklis, Kohn, Nerode, and Sastry, editors, Hybrid Systems II, number 999 in LNCS. Springer Verlag, 1996. Available at http://www.parc.xerox.com/spl/projects/mbc/publications/mbc-hccprogramming-lncs999.ps.

[16] Nicholas Halbwachs, Paul Caspi, Pascal Raymond, and Daniel Pilaud. The synchronous data flow programming language LUSTRE. Proceedings of the IEEE, 79(9):1305-1320, 1991.

[17] Walter Hamscher, Luca Console, and Johan de Kleer. Logical foundations - introduction. In Walter Hamscher, Luca Console, and Johan de Kleer, editors, Readings in Model-Based Diagnosis. Morgan Kaufmann, 1992.

[18] Walter Hamscher and Randall Davis. Diagnosing circuits with state: An inherently underconstrained problem. In Proceedings of the National Conference on Artificial Intelligence (AAAI-84), pages 142-147, 1984.

[19] J. Jaffar and J. L. Lassez. Constraint logic programming. In Proceedings of the 14th ACM Symposium on Principles of Programming Languages, pages 111-119, 1987.

[20] J. Jaffar, S. Michaylov, P. Stuckey, and R. Yap. The CLP(R) language and system. ACM Transactions on Programming Languages and Systems, 14(3):339-395, July 1992.

[21] Lalita Jagadeesan, Carlos Puchol, and James Von Olnhausen. A formal approach to reactive systems software: A telecommunications application in ESTEREL. In Proceedings of the 1995 Workshop on Industrial-Strength Formal Specification Techniques, pages 132-145, 1995.

[22] Kim Marriott and Peter Stuckey. Programming with Constraints: An Introduction. The MIT Press, 1998.

[23] David Poole. Normality and faults in logic-based diagnosis. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), volume 2, pages 1304-1310, 1989. Available at ftp://ftp.cs.ubc.ca/ftp/local/poole/papers/ijcai89.ps.gz.

[24] R. Reiter. A logic for default reasoning. Artificial Intelligence, 13(1-2):81-132, 1980.

[25] Raymond Reiter. A theory of diagnosis from first principles. Artificial Intelligence, 32:57-95, 1987.

[26] Vijay Saraswat and Evan Tick. A retrospective look at concurrent logic programming. Available at ftp://ftp.parc.xerox.com/pub/ccp/jlp-survey/survey-2.ps, June 1993.

[27] Vijay A. Saraswat. Concurrent Constraint Programming. ACM Doctoral Dissertation Award, Logic Programming Series. MIT Press, 1993. Doctoral dissertation 1989, Dept. of Computer Science, Carnegie-Mellon University.

[28] Vijay A. Saraswat, Radha Jagadeesan, and Vineet Gupta. Foundations of timed concurrent constraint programming. In Proceedings of the 9th Annual IEEE Symposium on Logic in Computer Science, 1994.

[29] Vijay A. Saraswat, Radha Jagadeesan, and Vineet Gupta.
Timed default concurrent constraint programming. Journal of Symbolic Computation, 11:1-100, 1996. Available at http://www.parc.xerox.com/spl/projects/mbc/publications/mbc-defaultTcc-jsc.ps.

[30] Murray Shanahan. Prediction is deduction but explanation is abduction. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), volume 2, pages 1055-1060, 1989.

[31] Peter Struss and Oskar Dressler. Physical negation: Integrating fault models into the general diagnostic engine. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), volume 2, pages 1318-1323, 1989.

[32] Brian Williams and P. Pandurang Nayak. A model-based approach to reactive self-configuring systems. In Working Papers of the 7th International Workshop on Principles of Diagnosis (DX-96), pages 267-274, 1996.

[33] Hao-Chi Wong, Markus P. J. Fromherz, Vineet Gupta, and Vijay A. Saraswat. Control-based programming of electro-mechanical controllers. In Proceedings of the IJCAI Workshop on Executable Temporal Logics, 1995. Available at http://www.parc.xerox.com/spl/projects/mbc/publications/mbc-controllers-ijcai95.ps.
