UBC Theses and Dissertations
A conditional model of abduction. Becher, Verónica.
We often propose very plausible hypotheses as explanations for an observation, given what we know or believe. In light of new information, however, such explanations may become void; under this view, explanations are defeasible. In this thesis we present a general model of explanation in artificial intelligence based on a theory of belief revision (the process of expanding, correcting, and contracting existing beliefs), which models the dynamics of belief in a given representation. We take seriously the notion that explanations can be constructed from default knowledge and should be determined relative to a background set of beliefs. Based on the idea that an explanation should not only render the observation plausible but itself be maximally plausible, a preference ordering on explanations arises naturally. Our model of abduction (the process of inferring the preferred explanations) uses existing conditional logics for default reasoning and belief revision, based on standard modal systems. We arrive at a semantics for explanation in terms of sets of possible worlds: simple modal structures that are also suitable for other forms of defeasible reasoning. The result of this thesis is an object-level account of abduction based on the revision of the epistemic state of a program or agent. Abductive frameworks like Theorist, as well as consistency-based diagnosis (today's "canonical" frameworks for diagnosis), are shown to be captured by our model.
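The preference ordering described above can be sketched informally: among the candidate hypotheses that account for an observation, prefer those that are themselves maximally plausible. The following Python sketch is an illustrative toy only, not the thesis's conditional-logic formalism; the hypothesis names, the rank function, and the `explains` relation are assumptions introduced for the example.

```python
# Toy sketch of preference-based abduction: select the hypotheses that
# explain the observation AND are maximally plausible (minimal rank).
# All domain details below are illustrative assumptions.

def preferred_explanations(observation, candidates, rank, explains):
    """Return the candidates that explain the observation and carry
    the minimal rank (i.e., are maximally plausible)."""
    viable = [h for h in candidates if explains(h, observation)]
    if not viable:
        return []
    best = min(rank(h) for h in viable)   # smallest rank = most plausible
    return [h for h in viable if rank(h) == best]

# Illustrative domain: why is the grass wet?
candidates = ["rain", "sprinkler", "flood"]
rank = {"rain": 0, "sprinkler": 1, "flood": 2}.get   # lower = more plausible
explains = lambda h, o: o == "grass_wet"             # all three would explain it

print(preferred_explanations("grass_wet", candidates, rank, explains))
# → ['rain']
```

Note that the ordering is defeasible in spirit: if new information ruled out "rain" (say, by removing it from the candidates or demoting its rank), the preferred explanation would shift to "sprinkler" without changing the selection procedure.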