Computing textual inferences


Condoravdi, C. Computing textual inferences. Invited talk at Georgetown University; 2008 October 31; Washington, DC.


A measure of understanding a text is the ability to make inferences based on the information it conveys. Given a passage of text and a hypothesis, the task is to automatically infer whether the hypothesis follows from the text, is contradicted by it, or is compatible with it. At PARC we have been working on a system for computing linguistically-based textual inferences such as the ones below.

Passage: Ed has been living in Athens for 3 years. Mary visited Athens in the last 2 years.

Hypothesis: Mary visited Athens while Ed lived in Athens.

Answer: YES

Passage: The diplomat does not know that the president failed to destroy the evidence.

Hypothesis: The president managed to destroy the evidence.

Answer: NO

Passage: No one stayed throughout the concert.

Hypothesis: No one stayed throughout the first part of the concert.


Texts are parsed to produce packed functional structures, which are then rewritten and canonicalized, without unpacking, into abstract knowledge representations (AKRs). An AKR is a flat set of facts involving concepts, roles, temporal relations, and contexts.
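As an illustrative sketch only, a flat AKR-style fact set for a simple sentence might be encoded as a set of tuples. The predicate, concept, and role names below are hypothetical placeholders, not PARC's actual AKR vocabulary:

```python
# Hypothetical AKR-style encoding for "Mary visited Athens":
# a flat set of facts over concepts, roles, temporal relations, and contexts.
# All identifiers here are illustrative, not PARC's actual vocabulary.
akr = {
    ("concept", "visit:1", "visit"),           # visit:1 instantiates the concept 'visit'
    ("concept", "Mary:2", "person"),
    ("concept", "Athens:3", "city"),
    ("role", "Agent", "visit:1", "Mary:2"),    # Mary is the agent of the visiting event
    ("role", "Theme", "visit:1", "Athens:3"),
    ("temporal", "visit:1", "before", "Now"),  # past tense: event precedes speech time
    ("context", "visit:1", "t"),               # asserted in the top-level context t
}

# Because the representation is a flat set, facts can be inspected uniformly,
# e.g. collecting every agent mentioned in the representation:
agents = {fact[3] for fact in akr if fact[0] == "role" and fact[1] == "Agent"}
```

The flatness matters: downstream components can match and rewrite individual facts without traversing a nested structure.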

In this talk I show how AKRs are derived from parsed text and discuss the system’s algorithm for entailment and contradiction detection (ECD). ECD operates on the AKRs of the passage and of the hypothesis in order to detect a potential entailment or contradiction between them, without the need for disambiguation.
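A toy version of such a check can be sketched as fact-set comparison. This is a deliberately simplified assumption, not the actual ECD algorithm, which also aligns terms between passage and hypothesis, uses concept subsumption, and reasons about contexts and their polarity:

```python
def check(passage_facts, hypothesis_facts):
    """Toy entailment/contradiction check over flat AKR-style fact sets.

    Simplified sketch: the hypothesis counts as entailed if every one of
    its facts already appears among the passage facts, and as contradicted
    if the passage asserts the negation of some hypothesis fact.
    """
    if hypothesis_facts <= passage_facts:
        return "YES"       # every hypothesis fact is supported by the passage
    negated = {("not",) + fact for fact in hypothesis_facts}
    if negated & passage_facts:
        return "NO"        # the passage denies a hypothesis fact
    return "UNKNOWN"       # compatible: neither entailed nor contradicted

# e.g. check({("visit", "Mary", "Athens")}, {("visit", "Mary", "Athens")})
# returns "YES", since the hypothesis facts are a subset of the passage facts.
```

Note that the third example above (the concert) falls into the "compatible" case: the passage neither supports nor denies the hypothesis.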
