# Course:CPSC522/Abduction

Sir John Beazley cataloguing an unidentified vase.

## Abduction

Abduction is a technique of reasoning about unexplained observations.

Principal Author: Fabian Ruffy
Collaborators:

## Abstract

When observing unknown manifestations, humans often try to understand the cause behind the event, forming a new hypothesis. This common-sense process of reasoning about the world has been an intense subject of study in philosophy, logic theory, and artificial intelligence. Drawing from the philosophical descriptions, logic theory formalised this form of human thinking into a concrete set of literals and clauses, which can be expressed in logic programming. Building on the abductive form of logic programming, artificial intelligence is able to express human reasoning about puzzling events, which often manifests itself in the form of diagnosis or failure-analysis programs. This page provides a brief, general overview of the intuition of abduction, its theoretical background, and its applications in relation to artificial intelligence.

### Builds on

Abduction is a form of Inference. Abductive logic programming builds on logic programming, and some manifestations use negation as failure. Probabilistic abduction, on the other hand, relies on probability and Bayesian networks.

### Related Pages

Abduction is a fundamental discipline and is thus broadly applicable. In particular, abduction can be used to model Bayesian belief networks, which represent information in fields such as biology, engineering, genetics, and decision support systems. It also finds application in belief revision and knowledge compilation, and can give reinforcement learning agents the ability to reason about observations.

## Content

The three techniques of logical inference in the diagnostic cycle.

### General Overview

Abduction (colloquially referred to as "guessing") is a technique for explaining observations whose cause is currently unclear, by hypothesising the most likely reason. Abduction allows an actor to fill in missing information according to the knowledge it possesses about the rules of the world. This form of hypothesis formation is common practice in the medical and scientific fields for understanding new and unknown observations.[1]

For example, a practitioner observes symptoms in a patient and has to diagnose a medical condition. The cause of the symptoms is unclear, and so is the exact condition of the patient. Based on pre-acquired knowledge of rules and the limited observations made so far, the practitioner has to decide what type of pathogen is at play. Similarly, an evolutionary biologist who observes peculiar behaviour in an animal will try to find a reasonable and logical explanation as to why the animal behaves as it does.

Descriptions of abduction have existed since antiquity in the form of Aristotle's "apagoge".[2] The intellectual field itself, however, was pioneered by C.S. Peirce, primarily known for his contributions to pragmatism and semiotics. In his essay "Deduction, Induction, and Hypothesis",[3] Peirce divided modes of knowledge inference into the three fundamental categories of deduction, induction, and abduction. To clarify his reasoning, he provided the following example:

| Deduction | Induction | Abduction |
| --- | --- | --- |
| Rule: All the beans from this bag are white. | Case: These beans are selected from this bag. | Rule: All the beans from this bag are white. |
| Case: These beans are from this bag. | Result: These beans are white. | Result: These beans are white. |
| Result: These beans are white. | Rule: All the beans from this bag are white. | Case: These beans are from this bag. |

Here, Result represents an observation, Case describes the explanation for the observation, and Rule is an environmental law assumed to always hold true. In all cases, the goal is to augment missing knowledge. Intuitively, abduction runs in a backward direction, rather than the forward one of standard inference. Whereas induction attempts to produce a new rule, and deduction tries to produce an expected result, abduction provides a new, previously unknown hypothesis. In this sense, abduction may be considered an initial step in the scientific method, with induction as the final step.[3] Nonetheless, only deduction, by nature, yields a certain conclusion. Abduction and induction result in likely, but unverified, information. Abduction is also non-monotonic, as new information may invalidate previous assumptions and cause the current state to become inconsistent.[1]

### Formalisation

Peirce's view of abduction can be formalised as a threefold relation of literals. In concrete logic, abduction operates on the notions of observations, explanations, and theories. Based on the observations $\varphi$ and theories $\Theta$, it is possible to derive a likely and atomic explanation $\alpha$ such that:

   $\Theta, \alpha \models \varphi$


For example, given the following theories $\Theta$:

   grass-is-wet ← rained-last-night
   grass-is-wet ← sprinkler-was-on
   shoes-are-wet ← grass-is-wet


We can define observations $\varphi$ = {shoes-are-wet, grass-is-wet} and explanations $\alpha$ = {rained-last-night, sprinkler-was-on}.

If we make the observation shoes-are-wet, we can infer that the grass-is-wet must also be true. If the grass is wet, one possible explanation may be that it rained last night. Note that, in this example, we are unable to distinguish whether rained-last-night or sprinkler-was-on were the true source of the observation.
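This backward-chaining search for explanations can be sketched in a few lines of Python. The `RULES` table and `explain` helper below are illustrative constructions for this page's wet-grass theory, not part of any standard library:

```python
# Minimal sketch of abduction over the wet-grass theory above: chain
# backwards from a goal to the abducibles that would entail it.
RULES = {
    "grass-is-wet": [["rained-last-night"], ["sprinkler-was-on"]],
    "shoes-are-wet": [["grass-is-wet"]],
}
ABDUCIBLES = {"rained-last-night", "sprinkler-was-on"}

def explain(goal):
    """Return sets of abducibles that, together with RULES, entail `goal`."""
    if goal in ABDUCIBLES:
        return [{goal}]
    explanations = []
    for body in RULES.get(goal, []):
        partial = [set()]
        for subgoal in body:
            partial = [e | s for e in partial for s in explain(subgoal)]
        explanations.extend(partial)
    return explanations

print(explain("shoes-are-wet"))
# -> [{'rained-last-night'}, {'sprinkler-was-on'}]
```

As in the text, both rained-last-night and sprinkler-was-on are returned as indistinguishable candidate explanations for the wet shoes.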

For $\alpha$ to be an explanation of $\varphi$ according to $\Theta$, it must at least satisfy the following constraints[1]:

1. $\Theta \cup \alpha \models \varphi$.
2. $\Theta \cup \alpha$ is consistent.
3. $\alpha$ is "minimal".
4. $\alpha$ is atomic or a conjunction of atoms.

This means that, given our theory $\Theta$, the explanation $\alpha$ must entail the observation $\varphi$. At the same time, the consistency of $\Theta \cup \alpha$ ensures that the theory does not contain a contradiction and remains logically sound. There are several approaches to defining what exactly "minimal" means[4][1]. In general, it implies that $\alpha$ is sufficient to explain $\varphi$. A concrete approach is to define $\alpha$ as the "weakest explanation":

Any $\alpha$ is the weakest explanation for $\varphi$ with respect to $\Theta$ iff

1. $\Theta, \alpha \models \varphi$;
2. for all other formulas $\beta$ such that $\Theta, \beta \models \varphi$, it holds that $\models \beta \rightarrow \alpha$.

For example:

   p ← q
   p ← q, r


{q} is the minimal (weakest) explanation of p, whereas {q, r} is not, since q ∧ r entails q but not vice versa[4].

Lastly, in line with the condition of minimality, $\alpha$ should only consist of atoms or be atomic in order to represent quantifiable information.[4]
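For explanations that are conjunctions of atoms, the entailment test $\models \beta \rightarrow \alpha$ reduces to a subset check ($\alpha \subseteq \beta$). A small illustrative sketch (the `weakest` helper is hypothetical, not a library function):

```python
# Hedged sketch: for conjunctions of atoms represented as Python sets,
# beta -> alpha holds exactly when alpha is a subset of beta, so the
# weakest explanations are those implied by every other candidate.
def weakest(explanations):
    return [a for a in explanations if all(a <= b for b in explanations)]

# Candidates for p under the rules p <- q and p <- q, r:
print(weakest([{"q"}, {"q", "r"}]))   # -> [{'q'}]
```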

#### Example 1

This altered example from Poole's and Mackworth's "Artificial Intelligence: Foundations of Computational Agents"[5] provides an overview of the inference process of explanations under constraints.

We assume an agent knows the following theories:

   bronchitis ← influenza.
   bronchitis ← smokes.
   coughing ← bronchitis.
   wheezing ← bronchitis.
   fever ← influenza.
   soreThroat ← influenza.


and can draw on the following assumables as explanations:

    smokes, nonsmoker, influenza.


The agent is aware of the following constraint rule:

   false ← smokes ∧ nonsmoker.


If the agent observes wheezing, there are two minimal explanations:

   {influenza} and {smokes}


Both of these explanations lead to bronchitis and coughing.

If wheezing and fever are observed, there is only one minimal explanation:

   {influenza}.


When wheezing is observed, the agent reasons that it must be bronchitis, and so influenza and smokes are the hypothesised culprits. However, if fever is also observed, the patient must have influenza, so there is no need for the hypothesis smokes; it has been explained away.

If wheezing and nonsmoker are observed instead, there is one minimal explanation:

   {influenza, nonsmoker}


The alternative {smokes, nonsmoker} would violate consistency condition 2), as the assumption smokes directly conflicts with the observation of nonsmoker.
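The whole of Example 1 can be reproduced with a short Python sketch. The rule table and helper names below are illustrative constructions, but the minimal, constraint-respecting explanations they compute match the ones discussed above:

```python
# Sketch of Example 1: enumerate explanations for observations and drop
# those violating the integrity constraint false <- smokes & nonsmoker.
RULES = {
    "bronchitis": [["influenza"], ["smokes"]],
    "coughing": [["bronchitis"]],
    "wheezing": [["bronchitis"]],
    "fever": [["influenza"]],
    "soreThroat": [["influenza"]],
}
ASSUMABLES = {"smokes", "nonsmoker", "influenza"}
CONSTRAINTS = [{"smokes", "nonsmoker"}]  # these atoms may not co-occur

def explain(goal):
    """Backward-chain from `goal` to the assumables that entail it."""
    if goal in ASSUMABLES:
        return [{goal}]
    out = []
    for body in RULES.get(goal, []):
        partial = [set()]
        for sub in body:
            partial = [e | s for e in partial for s in explain(sub)]
        out.extend(partial)
    return out

def explain_all(observations):
    """Minimal, consistent explanations covering all observations."""
    partial = [set()]
    for obs in observations:
        partial = [e | s for e in partial for s in explain(obs)]
    consistent = [e for e in partial if not any(c <= e for c in CONSTRAINTS)]
    # keep only minimal sets (no strict subset also explains everything)
    return [e for e in consistent if not any(o < e for o in consistent)]

print(explain_all(["wheezing"]))           # -> [{'influenza'}, {'smokes'}]
print(explain_all(["wheezing", "fever"]))  # -> [{'influenza'}]
print(explain_all(["wheezing", "nonsmoker"]))
```

The last query returns only {influenza, nonsmoker}: the candidate containing both smokes and nonsmoker is filtered out by the integrity constraint, mirroring the explaining-away behaviour described above.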

### Abduction and Logic Programming

A practical way to represent abductive reasoning and knowledge is logic programming, which provides a declarative framework for solving abductive or goal-oriented logic problems.[4] Valid explanations are generally referred to as assumables[5] or abducibles[4], and the set of observations and theories as the knowledge base $KB$. To ensure consistent state, integrity constraints are expressed in the form of constraint handling rules.

Logic programming as a discipline evolved in the 1970s. A major part of the field was pioneered by Colmerauer, Kowalski, and Roussel, who developed the Prolog programming language.[1] Prolog is inspired by first-order logic and is an implementation of logic theory, featuring queries and an automatic resolution mechanism to expand its knowledge base. Resolution in Prolog can be viewed as a form of abduction, with the purpose of completing a program with the facts needed for a query to succeed.

In Prolog, a program P is an ordered set of rules (the knowledge base) and facts (assumables). Rules are restricted to Horn clauses, which contain at most one positive literal. A Prolog program accepts a query (the theorem to prove) and attempts to derive it from its knowledge base. If the query follows from the program, a positive answer is produced, indicating a successful query.

#### Knowledge Assimilation

A major feature of abduction in logic programming is the assimilation of new knowledge into the existing knowledge base KB.[4] In an abductive framework, KB has to be continuously updated in order to accommodate newly emerging rules or explanations (see also Belief Revision (Wikipedia)). Four common cases are handled in abductive logic programming once new information KB* has been acquired:

1. If KB* can already be deduced from the existing rules and assumables, the knowledge base is not updated. As a result, KB' is equal to KB.
2. The knowledge base can be split into two parts, KB1 and KB2, such that KB1 ∪ KB* ⊨ KB2. The redundant part KB2 is discarded, and the new knowledge base is KB1 combined with KB*.
3. KB* violates the integrity of KB. A valid state can only be restored by rejecting or changing one of the assumptions that led to the contradiction.
4. If KB* is independent of KB, the new information is added to KB and forms KB'.
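A hedged Python sketch of this case analysis, over knowledge bases simplified to plain sets of atoms (real abductive logic programming assimilates rules and uses deduction rather than subset checks, and cases 2 and 4 collapse into a single union in this simplification):

```python
# Illustrative dispatch over the four assimilation cases described above.
def assimilate(kb, new, constraints):
    """Return the updated knowledge base KB' after acquiring `new` (KB*)."""
    if new <= kb:                      # case 1: already deducible, KB' = KB
        return kb
    for c in constraints:              # case 3: integrity would be violated
        if c <= kb | new:
            raise ValueError("belief revision needed: " + str(sorted(c)))
    return kb | new                    # cases 2/4: extend KB with KB*

kb = {"father(John, Mary)", "mother(Jane, Mary)"}
# hypothetical constraint: Bob cannot have two different mothers
ic = [{"mother(Jane, Bob)", "mother(Joan, Bob)"}]
kb = assimilate(kb, {"mother(Jane, Bob)"}, ic)
# a later observation of mother(Joan, Bob) would now trigger case 3
```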

#### Example 2

This example from "Abductive Logic Programming"[4] demonstrates the behaviour of knowledge assimilation in Abductive Logic Frameworks. Assuming the current KB consists of the following rules:

   sibling(x, y) ← parent(z, x), parent(z, y)
   parent(x, y) ← father(x, y)
   parent(x, y) ← mother(x, y)


existing assumables:

   father(John, Mary)
   mother(Jane, Mary)


and integrity constraints:

   x = y ← father(x, z), father(y, z)
   x = y ← mother(x, z), mother(y, z)


Under these constraints, a child's father and mother must each be unique, avoiding biological inconsistencies. Suppose the agent observes:

   sibling(Mary, Bob)


This can be abduced into either of two minimal KB updates:

   father(John, Bob)
   mother(Jane, Bob)


Both of these updates satisfy the integrity constraints and can be assumed for the time being. However, suppose we are given a further observation:

   mother(Joan, Bob).


mother(Jane, Bob) now violates the constraints and has to be removed. The agent applies a belief revision process and updates the knowledge base.

Figure 1. A diagram of switches and wires to model an observable environment.

#### Probabilistic Abduction

As demonstrated in Example 1, abduction based on first-order logic and constraint rules may not be sufficient to pick a unique explanation; minimality does not guarantee certainty in the decision-making process. When deriving a diagnosis or explanation over multiple possible hypotheses, it may be necessary to consider prior probability distributions and likelihoods in order to make a decision. In probabilistic abduction, the knowledge base is augmented with probabilistic knowledge: the prior probabilities of explanations and the conditional likelihoods of varying combinations of observations and explanations. A potential solution to a problem is the set of explanations which maximises the overall likelihood of the observed event.[6] In this sense, probabilistic abduction attempts to provide a deductive solution based on its current assumptions about observed likelihoods. This maximisation is usually undertaken using Bayes' theorem and Bayesian networks. Forms of probabilistic abduction include:

1. Pearl's belief networks[7]
2. Probabilistic Horn abduction[8]
3. Probabilistic Markov logic networks[9]
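As a minimal illustration of the idea, loosely in the spirit of probabilistic Horn abduction[8], competing explanations can be ranked by the product of the priors of their assumptions. The priors and table below are made-up values for the earlier wet-grass example, not figures from the cited paper:

```python
# Hedged sketch: rank alternative explanations for an observation by the
# product of the (illustrative, assumed) prior probabilities of their
# hypotheses, and return the most probable one.
PRIOR = {"rained-last-night": 0.3, "sprinkler-was-on": 0.1}
EXPLANATIONS = {"grass-is-wet": [{"rained-last-night"}, {"sprinkler-was-on"}]}

def best_explanation(observation):
    def score(expl):
        p = 1.0
        for hypothesis in expl:
            p *= PRIOR[hypothesis]   # independence assumed for simplicity
        return p
    return max(EXPLANATIONS[observation], key=score)

print(best_explanation("grass-is-wet"))   # -> {'rained-last-night'}
```

With these priors, rain is preferred over the sprinkler as the explanation for wet grass; a full probabilistic Horn abduction system would additionally propagate these probabilities through a Bayesian network.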

#### Example 3 (Prolog)

Figure 1 describes a sample electrical environment used in Poole's and Mackworth's "Artificial Intelligence: Foundations of Computational Agents"[5]. One possible way to express this scenario is to write a Prolog program. In this case, constraint handling rules are used to specify integrity constraints.[10]

Lines with chr_constraint describe the abducibles (or assumables) in our environment, whereas lines with ==> represent integrity constraints to guarantee consistent state. The remaining code describes the current known rules in the system. The entire program acts as our knowledge base, which we can query to explain observed events.

 

   :- use_module(library(chr)).

   :- chr_constraint ok_s1, ok_s2, ok_s3.
   :- chr_constraint ok_cb1.
   :- chr_constraint ok_l1, ok_l2.
   :- chr_constraint live_outside.
   :- chr_constraint broken_s1, broken_s2, broken_s3.
   :- chr_constraint broken_cb1.
   % broken_l2 added here: it is used below but was missing from the
   % original declarations.
   :- chr_constraint broken_l1, broken_l2.
   :- chr_constraint outside_power_down.
   :- chr_constraint up_s1, up_s2.
   :- chr_constraint down_s1, down_s2.

   ok_s1, broken_s1 ==> false.
   ok_s2, broken_s2 ==> false.
   up_s1, down_s1 ==> false.
   up_s2, down_s2 ==> false.
   live_outside, outside_power_down ==> false.
   broken_l1, ok_l1 ==> false.

   live_w5 :- live_outside.
   dead_w5 :- outside_power_down.
   live_w4 :- live_w3, ok_s3.
   dead_w4 :- broken_s3.
   dead_w4 :- dead_w3.
   live_w3 :- live_w5, ok_cb1.
   dead_w3 :- broken_cb1.
   dead_w3 :- dead_w5.
   live_w2 :- live_w3, down_s1, ok_s1.
   dead_w2 :- broken_s1.
   dead_w2 :- dead_w3.
   live_w1 :- live_w3, up_s1, ok_s1.
   dead_w1 :- broken_s1.
   dead_w1 :- dead_w3.
   live_w0 :- live_w1, up_s2, ok_s2.
   live_w0 :- live_w2, down_s2, ok_s2.
   dead_w0 :- broken_s2.
   dead_w0 :- up_s2, dead_w1.
   dead_w0 :- down_s2, dead_w2.
   lit_l2 :- live_w4, ok_l2.
   dark_l2 :- broken_l2.
   dark_l2 :- dead_w4.
   lit_l1 :- live_w0, ok_l1.
   dark_l1 :- broken_l1.
   dark_l1 :- dead_w0.

If we observe lit_l1 (light 1 being on), the Prolog program will provide us with two possible combinations to achieve this state:

   ok_s1, ok_s2, ok_cb1, ok_l1, live_outside, up_s1, up_s2
   ok_s1, ok_s2, ok_cb1, ok_l1, live_outside, down_s1, down_s2

On the other hand, if we observe dark_l1, we receive multiple possible explanations for this outcome:

   {broken_l1}, {broken_s2},
   {up_s2, broken_s1}, {up_s2, broken_cb1}, {up_s2, outside_power_down},
   {down_s2, broken_s1}, {down_s2, broken_cb1}, {down_s2, outside_power_down}

Note that the exact set of explanations returned may depend on the solver library being used. In the Prolog CHR library, broken_cb1, down_s2 is treated as preferable to broken_cb1, up_s1, down_s2, as it is the smaller, more minimal explanation.

### Use in Artificial Intelligence

Abductive logic programming is a common tool in artificial intelligence, and applications leveraging abduction are manifold. By observing its environment, an agent can augment its knowledge base and construct new theories to describe its surroundings. This is particularly useful for diagnostic, cognitive, or expert systems, which operate in uncertain environments. One particular example is BACON[11], which formulates scientific laws from heuristically provided information. After acquiring enough data, the system is able to construct its own hypotheses, akin to early scientific discoveries. Similarly, fault diagnosis often relies on abductive methods to report and automatically heal bugs or failures in a system[12].

## References

1. Aliseda-Llera, Atocha. Seeking explanations: Abduction in logic, philosophy of science and artificial intelligence. Diss. PhD thesis: Stanford University, Department of Philosophy, Stanford, 1997.
2. Flick, Uwe, Ernst von Kardoff, and Ines Steinke, eds. A companion to qualitative research. Sage, 2004.
3. McKaughan, D. J. "From Ugly Duckling to Swan: C. S. Peirce, Abduction, and the Pursuit of Scientific Theories." Transactions of the Charles S. Peirce Society: A Quarterly Journal in American Philosophy, vol. 44 no. 3, 2008, pp. 446-468. Project MUSE, muse.jhu.edu/article/252833.
4. Kakas, Antonis C., Robert A. Kowalski, and Francesca Toni. "Abductive logic programming." Journal of logic and computation 2.6 (1992): 719-770.
5. Poole, David L., and Alan K. Mackworth. Artificial Intelligence: foundations of computational agents. Cambridge University Press, 2010.
6. Eiter, Thomas, and Georg Gottlob. "The complexity of logic-based abduction." Journal of the ACM (JACM) 42.1 (1995): 3-42.
7. Pearl, Judea. "Fusion, propagation, and structuring in belief networks." Artificial intelligence 29.3 (1986): 241-288.
8. Poole, David. "Probabilistic Horn abduction and Bayesian networks." Artificial intelligence 64.1 (1993): 81-129.
9. Kate, Rohit J., and Raymond J. Mooney. "Probabilistic abduction using Markov logic networks." IJCAI-09 Workshop on Plan, Activity, and Intent Recognition. 2009.
10. Christiansen, Henning, and Veronica Dahl. "Assumptions and abduction in Prolog." 3rd International Workshop on Multiparadigm Constraint Programming Languages, MultiCPL. Vol. 4. 2004.
11. Langley, Pat, Gary L. Bradshaw, and Herbert A. Simon. "Rediscovering chemistry with the BACON system." Machine Learning, Volume I. 1983. 307-329.
12. Isermann, Rolf. Fault-diagnosis systems: an introduction from fault detection to fault tolerance. Springer Science & Business Media, 2006.

## Annotated Bibliography

Aliseda-Llera, Atocha. Seeking explanations: Abduction in logic, philosophy of science and artificial intelligence. Diss. PhD thesis: Stanford University, Department of Philosophy, Stanford, 1997.

Kakas, Antonis C., Robert A. Kowalski, and Francesca Toni. "Abductive logic programming." Journal of logic and computation 2.6 (1992): 719-770.

Poole, David. "Logic programming, abduction and probability." New Generation Computing 11.3-4 (1993): 377.

Poole, David L., and Alan K. Mackworth. Artificial Intelligence: foundations of computational agents. Cambridge University Press, 2010.

Isermann, Rolf. Fault-diagnosis systems: an introduction from fault detection to fault tolerance. Springer Science & Business Media, 2006.

Langley, Pat, Gary L. Bradshaw, and Herbert A. Simon. "Rediscovering chemistry with the BACON system." Machine Learning, Volume I. 1983. 307-329.

Flick, Uwe, Ernst von Kardoff, and Ines Steinke, eds. A companion to qualitative research. Sage, 2004.

McKaughan, D. J. "From Ugly Duckling to Swan: C. S. Peirce, Abduction, and the Pursuit of Scientific Theories." Transactions of the Charles S. Peirce Society: A Quarterly Journal in American Philosophy, vol. 44 no. 3, 2008, pp. 446-468. Project MUSE, muse.jhu.edu/article/252833.

Eiter, Thomas, and Georg Gottlob. "The complexity of logic-based abduction." Journal of the ACM (JACM) 42.1 (1995): 3-42.

Pearl, Judea. "Fusion, propagation, and structuring in belief networks." Artificial intelligence 29.3 (1986): 241-288.

Poole, David. "Probabilistic Horn abduction and Bayesian networks." Artificial intelligence 64.1 (1993): 81-129.

Kate, Rohit J., and Raymond J. Mooney. "Probabilistic abduction using Markov logic networks." IJCAI-09 Workshop on Plan, Activity, and Intent Recognition. 2009.