Logical Agents
Philipp Koehn
5 March 2020
Philipp Koehn Artificial Intelligence: Logical Agents 5 March 2020
The world is everything that is the case.
Wittgenstein, Tractatus
Outline 2
● Knowledge-based agents
● Logic in general—models and entailment
● Propositional (Boolean) logic
● Equivalence, validity, satisfiability
● Inference rules and theorem proving
– forward chaining
– backward chaining
– resolution
knowledge-based agents
Knowledge-Based Agent 4
● Knowledge base = set of sentences in a formal language
● Declarative approach to building an agent (or other system):
TELL it what it needs to know
● Then it can ASK itself what to do—answers should follow from the KB
● Agents can be viewed at the knowledge level
i.e., what they know, regardless of how implemented
● Or at the implementation level
i.e., data structures in KB and algorithms that manipulate them
A Simple Knowledge-Based Agent 5
function KB-AGENT(percept) returns an action
  static: KB, a knowledge base
          t, a counter, initially 0, indicating time
  TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
  action ← ASK(KB, MAKE-ACTION-QUERY(t))
  TELL(KB, MAKE-ACTION-SENTENCE(action, t))
  t ← t + 1
  return action
● The agent must be able to
– represent states, actions, etc.
– incorporate new percepts
– update internal representations of the world
– deduce hidden properties of the world
– deduce appropriate actions
example
Hunt the Wumpus 7
Computer game from 1972
Wumpus World PEAS Description 8
● Performance measure
– gold +1000, death -1000
– -1 per step, -10 for using the arrow
● Environment
– squares adjacent to wumpus are smelly
– squares adjacent to pit are breezy
– glitter iff gold is in the same square
– shooting kills wumpus if you are facing it
– shooting uses up the only arrow
– grabbing picks up gold if in same square
– releasing drops the gold in same square
● Actuators Left turn, Right turn,
Forward, Grab, Release, Shoot
● Sensors Breeze, Glitter, Smell
Wumpus World Characterization 9
● Observable? No—only local perception
● Deterministic? Yes—outcomes exactly specified
● Episodic? No—sequential at the level of actions
● Static? Yes—Wumpus and Pits do not move
● Discrete? Yes
● Single-agent? Yes—Wumpus is essentially a natural feature
Exploring a Wumpus World 10–17
[Figures: a step-by-step exploration of the Wumpus world, one slide per step]
Tight Spot 18
● Breeze in (1,2) and (2,1)
⇒ no safe actions
● Assuming pits uniformly distributed,
(2,2) has pit w/ prob 0.86, vs. 0.31
Tight Spot 19
● Smell in (1,1)
⇒ cannot move
● Can use a strategy of coercion: shoot straight ahead
– wumpus was there ⇒ dead ⇒ safe
– wumpus wasn’t there ⇒ safe
logic in general
Logic in General 21
● Logics are formal languages for representing information
such that conclusions can be drawn
● Syntax defines the sentences in the language
● Semantics define the “meaning” of sentences;
i.e., define truth of a sentence in a world
● E.g., the language of arithmetic
– x + 2 ≥ y is a sentence; x2 + y > is not a sentence
– x + 2 ≥ y is true iff the number x + 2 is no less than the number y
– x + 2 ≥ y is true in a world where x = 7, y = 1
x + 2 ≥ y is false in a world where x = 0, y = 6
Entailment 22
● Entailment means that one thing follows from another:
KB ⊧ α
● Knowledge base KB entails sentence α
if and only if
α is true in all worlds where KB is true
● E.g., the KB containing “the Ravens won” and “the Jays won”
entails “the Ravens won or the Jays won”
● E.g., x + y = 4 entails 4 = x + y
● Entailment is a relationship between sentences (i.e., syntax)
that is based on semantics
● Note: brains process syntax (of some sort)
Models 23
● Logicians typically think in terms of models, which are formally
structured worlds with respect to which truth can be evaluated
● We say m is a model of a sentence α
if α is true in m
● M(α) is the set of all models of α
⇒ KB ⊧ α if and only if M(KB) ⊆ M(α)
● E.g. KB = Ravens won and Jays won
α = Ravens won
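The model-containment test above can be made concrete. Here is a minimal Python sketch (the encoding of sentences as Boolean functions over a model and the names models_of, KB, alpha are mine, not from the slides):

```python
from itertools import product

# Two proposition symbols: R = "Ravens won", J = "Jays won".
symbols = ["R", "J"]

def models_of(sentence):
    """Set of models (frozensets of true symbols) in which sentence holds."""
    result = set()
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if sentence(model):
            result.add(frozenset(s for s in symbols if model[s]))
    return result

KB = lambda m: m["R"] and m["J"]   # "Ravens won and Jays won"
alpha = lambda m: m["R"]           # "Ravens won"

# KB |= alpha  iff  M(KB) is a subset of M(alpha)
print(models_of(KB) <= models_of(alpha))  # True
```

Here M(KB) has one model (both teams won) and M(alpha) has two, so the subset test succeeds.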
Entailment in the Wumpus World 24
● Situation after detecting nothing in [1,1], moving right, breeze in [2,1]
● Consider possible models for all ? squares, assuming only pits
● 3 Boolean choices ⇒ 8 possible models
Possible Wumpus Models 25
Valid Wumpus Models 26
KB = wumpus-world rules + observations
Entailment 27
KB = wumpus-world rules + observations
α1 = “[1,2] is safe”, KB ⊧ α1, proved by model checking
Valid Wumpus Models 28
KB = wumpus-world rules + observations
Not Entailed 29
KB = wumpus-world rules + observations
α2 = “[2,2] is safe”, KB ⊭ α2
Inference 30
● KB ⊢i α = sentence α can be derived from KB by procedure i
● Consequences of KB are a haystack; α is a needle.
Entailment = needle in haystack; inference = finding it
● Soundness: i is sound if
whenever KB ⊢i α, it is also true that KB ⊧ α
● Completeness: i is complete if
whenever KB ⊧ α, it is also true that KB ⊢i α
● Preview: we will define a logic (first-order logic) which is expressive enough to
say almost anything of interest, and for which there exists a sound and complete
inference procedure.
● That is, the procedure will answer any question whose answer follows from what
is known by the KB.
propositional logic
Propositional Logic: Syntax 32
● Propositional logic is the simplest logic—illustrates basic ideas
● The proposition symbols P1, P2 etc are sentences
● If P is a sentence, ¬P is a sentence (negation)
● If P1 and P2 are sentences, P1 ∧ P2 is a sentence (conjunction)
● If P1 and P2 are sentences, P1 ∨ P2 is a sentence (disjunction)
● If P1 and P2 are sentences, P1 ⇒ P2 is a sentence (implication)
● If P1 and P2 are sentences, P1 ⇔ P2 is a sentence (biconditional)
Propositional Logic: Semantics 33
● Each model specifies true/false for each proposition symbol
E.g. P1,2 = false, P2,2 = false, P3,1 = true
(with these symbols, 8 possible models, can be enumerated automatically)
● Rules for evaluating truth with respect to a model m:
¬P is true iff P is false
P1 ∧ P2 is true iff P1 is true and P2 is true
P1 ∨ P2 is true iff P1 is true or P2 is true
P1 ⇒ P2 is true iff P1 is false or P2 is true
i.e., is false iff P1 is true and P2 is false
P1 ⇔ P2 is true iff P1 ⇒ P2 is true and P2 ⇒ P1 is true
● Simple recursive process evaluates an arbitrary sentence, e.g.,
¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (false ∨ true) = true ∧ true = true
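The recursive evaluation process can be sketched in a few lines of Python (the tuple encoding of sentences and the function name pl_true are my own choices, not from the slides):

```python
# A sentence is a nested tuple: ("not", s), ("and", s1, s2), ("or", s1, s2),
# ("implies", s1, s2), ("iff", s1, s2), or a proposition symbol (a string).

def pl_true(sentence, model):
    """Evaluate a propositional sentence in a model (dict symbol -> bool)."""
    if isinstance(sentence, str):
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return pl_true(args[0], model) and pl_true(args[1], model)
    if op == "or":
        return pl_true(args[0], model) or pl_true(args[1], model)
    if op == "implies":
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == "iff":
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown operator {op}")

# The slide's example: ¬P1,2 ∧ (P2,2 ∨ P3,1) in the model above
m = {"P12": False, "P22": False, "P31": True}
s = ("and", ("not", "P12"), ("or", "P22", "P31"))
print(pl_true(s, m))  # True
```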
Truth Tables for Connectives 34
P Q ¬P P ∧Q P ∨Q P ⇒Q P ⇔Q
false false true false false true true
false true true false true true false
true false false false true false false
true true false true true true true
Wumpus World Sentences 35
● Let Pi,j be true if there is a pit in [i, j]
– observation R1: ¬P1,1
● Let Bi,j be true if there is a breeze in [i, j].
● “Pits cause breezes in adjacent squares”
– rule R2: B1,1 ⇔ (P1,2 ∨ P2,1)
– rule R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
– observation R4: ¬B1,1
– observation R5: B2,1
● What can we infer about P1,2, P2,1, P2,2, etc.?
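These sentences are small enough to answer the question by brute-force enumeration. A minimal Python sketch (the encoding of R1–R5 as one Boolean function and the names kb, entails are mine, not from the slides):

```python
from itertools import product

symbols = ["B11", "B21", "P11", "P12", "P21", "P22", "P31"]

def kb(m):
    """Conjunction of R1..R5 from the slide, in a model m (dict symbol -> bool)."""
    r1 = not m["P11"]
    r2 = m["B11"] == (m["P12"] or m["P21"])
    r3 = m["B21"] == (m["P11"] or m["P22"] or m["P31"])
    r4 = not m["B11"]
    r5 = m["B21"]
    return r1 and r2 and r3 and r4 and r5

def entails(kb, alpha):
    """KB |= alpha iff alpha holds in every model of KB."""
    for values in product([False, True], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if kb(m) and not alpha(m):
            return False
    return True

print(entails(kb, lambda m: not m["P12"]))  # True: [1,2] is safe
print(entails(kb, lambda m: not m["P22"]))  # False: [2,2] safe is not entailed
```

Only three of the 128 assignments satisfy the KB, and ¬P1,2 holds in all of them while ¬P2,2 fails in some, matching the entailment slides.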
Truth Tables for Inference 36
B1,1 B2,1 P1,1 P1,2 P2,1 P2,2 P3,1 R1 R2 R3 R4 R5 KB
false false false false false false false true true true true false false
false false false false false false true true true false true false false
⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮
false true false false false false false true true false true true false
false true false false false false true true true true true true true
false true false false false true false true true true true true true
false true false false false true true true true true true true true
false true false false true false false true false false true true false
⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮
true true true true true true true false true true false true false
● Enumerate rows (different assignments to symbols Pi,j )
● Check if rules are satisfied (Ri)
● Valid model (KB) if all rules satisfied
Inference by Enumeration 37
● Depth-first enumeration of all models is sound and complete
function TT-ENTAILS?(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic
  symbols ← a list of the proposition symbols in KB and α
  return TT-CHECK-ALL(KB, α, symbols, [ ])

function TT-CHECK-ALL(KB, α, symbols, model) returns true or false
  if EMPTY?(symbols) then
    if PL-TRUE?(KB, model) then return PL-TRUE?(α, model)
    else return true
  else do
    P ← FIRST(symbols); rest ← REST(symbols)
    return TT-CHECK-ALL(KB, α, rest, EXTEND(P, true, model)) and
           TT-CHECK-ALL(KB, α, rest, EXTEND(P, false, model))
● O(2^n) for n symbols; problem is co-NP-complete
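The same recursion translates almost line for line into Python. A sketch (sentences represented as Boolean functions over a model dict; the names are mine):

```python
def tt_entails(kb, alpha, symbols):
    """Recursive model enumeration, mirroring TT-ENTAILS? above.
    kb and alpha are functions model -> bool; symbols is a list of names."""
    def check_all(symbols, model):
        if not symbols:
            # In every model where KB holds, alpha must hold
            return (not kb(model)) or alpha(model)
        p, rest = symbols[0], symbols[1:]
        return (check_all(rest, {**model, p: True}) and
                check_all(rest, {**model, p: False}))
    return check_all(symbols, {})

# A ∧ (A ⇒ B) entails B
kb = lambda m: m["A"] and (not m["A"] or m["B"])
print(tt_entails(kb, lambda m: m["B"], ["A", "B"]))  # True
```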
equivalence, validity, satisfiability
Logical Equivalence 39
● Two sentences are logically equivalent iff true in same models:
α ≡ β if and only if α ⊧ β and β ⊧ α
(α ∧ β) ≡ (β ∧ α) commutativity of ∧
(α ∨ β) ≡ (β ∨ α) commutativity of ∨
((α ∧ β) ∧ γ) ≡ (α ∧ (β ∧ γ)) associativity of ∧
((α ∨ β) ∨ γ) ≡ (α ∨ (β ∨ γ)) associativity of ∨
¬(¬α) ≡ α double-negation elimination
(α Ô⇒ β) ≡ (¬β Ô⇒ ¬α) contraposition
(α Ô⇒ β) ≡ (¬α ∨ β) implication elimination
(α ⇔ β) ≡ ((α Ô⇒ β) ∧ (β Ô⇒ α)) biconditional elimination
¬(α ∧ β) ≡ (¬α ∨ ¬β) De Morgan
¬(α ∨ β) ≡ (¬α ∧ ¬β) De Morgan
(α ∧ (β ∨ γ)) ≡ ((α ∧ β) ∨ (α ∧ γ)) distributivity of ∧ over ∨
(α ∨ (β ∧ γ)) ≡ ((α ∨ β) ∧ (α ∨ γ)) distributivity of ∨ over ∧
Validity and Satisfiability 40
● A sentence is valid if it is true in all models,
e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B
● Validity is connected to inference via the Deduction Theorem:
KB ⊧ α if and only if (KB ⇒ α) is valid
● A sentence is satisfiable if it is true in some model
e.g., A ∨ B, C
● A sentence is unsatisfiable if it is true in no models
e.g., A ∧ ¬A
● Satisfiability is connected to inference via the following:
KB ⊧ α if and only if (KB ∧ ¬α) is unsatisfiable
i.e., prove α by reductio ad absurdum
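The satisfiability connection is easy to check by enumeration. A minimal sketch (Boolean-function sentences again; the name satisfiable is mine):

```python
from itertools import product

def satisfiable(sentence, symbols):
    """Is the sentence true in at least one model?"""
    return any(sentence(dict(zip(symbols, v)))
               for v in product([False, True], repeat=len(symbols)))

kb = lambda m: m["A"] and (not m["A"] or m["B"])   # A ∧ (A ⇒ B)
alpha = lambda m: m["B"]

# KB |= alpha  iff  KB ∧ ¬alpha is unsatisfiable (reductio ad absurdum)
print(not satisfiable(lambda m: kb(m) and not alpha(m), ["A", "B"]))  # True
```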
inference
Proof Methods 42
● Proof methods divide into (roughly) two kinds
● Application of inference rules
– Legitimate (sound) generation of new sentences from old
– Proof = a sequence of inference rule applications
Can use inference rules as operators in a standard search algorithm
– Typically require translation of sentences into a normal form
● Model checking
– truth table enumeration (always exponential in n)
– improved backtracking
– heuristic search in model space (sound but incomplete)
e.g., min-conflicts-like hill-climbing algorithms
Forward and Backward Chaining 43
● Horn Form (restricted)
KB = conjunction of Horn clauses
● Horn clause =
– proposition symbol; or
– (conjunction of symbols) ⇒ symbol
e.g., C ∧ (B ⇒ A) ∧ (C ∧ D ⇒ B)
● Modus Ponens (for Horn Form): complete for Horn KBs
α1, . . . , αn,   α1 ∧ ⋯ ∧ αn ⇒ β
――――――――――――――――――
β
● Can be used with forward chaining or backward chaining
● These algorithms are very natural and run in linear time
Example 44
● Idea: fire any rule whose premises are satisfied in the KB,
add its conclusion to the KB, until query is found
P ⇒ Q
L ∧ M ⇒ P
B ∧ L ⇒ M
A ∧ P ⇒ L
A ∧ B ⇒ L
A
B
forward chaining
Forward Chaining 46
● Start with given proposition symbols (atomic sentences)
e.g., A and B
● Iteratively try to infer truth of additional proposition symbols
e.g., from A ∧ B ⇒ C we therefore establish that C is true
● Continue until
– no more inference can be carried out, or
– goal is reached
Forward Chaining Example 47
● Given
P ⇒ Q
L ∧ M ⇒ P
B ∧ L ⇒ M
A ∧ P ⇒ L
A ∧ B ⇒ L
A
B
● Agenda: A, B
● Annotate Horn clauses with their number of premises
Forward Chaining Example 48
● Process agenda item A
● Decrease count for Horn clauses in which A is a premise
Forward Chaining Example 49
● Process agenda item B
● Decrease count for Horn clauses in which B is a premise
● A ∧ B ⇒ L now has all premises fulfilled
● Add L to agenda
Forward Chaining Example 50
● Process agenda item L
● Decrease count for Horn clauses in which L is a premise
● B ∧ L ⇒ M now has all premises fulfilled
● Add M to agenda
Forward Chaining Example 51
● Process agenda item M
● Decrease count for Horn clauses in which M is a premise
● L ∧ M ⇒ P now has all premises fulfilled
● Add P to agenda
Forward Chaining Example 52
● Process agenda item P
● Decrease count for Horn clauses in which P is a premise
● P ⇒ Q now has all premises fulfilled
● Add Q to agenda
● A ∧ P ⇒ L now has all premises fulfilled
Forward Chaining Example 53
● Process agenda item P
● Decrease count for Horn clauses in which P is a premise
● P ⇒ Q now has all premises fulfilled
● Add Q to agenda
● A ∧ P ⇒ L now has all premises fulfilled
● But L is already inferred
Forward Chaining Example 54
● Process agenda item Q
● Q is inferred
● Done
Forward Chaining Algorithm 55
function PL-FC-ENTAILS?(KB, q) returns true or false
  inputs: KB, the knowledge base, a set of propositional Horn clauses
          q, the query, a proposition symbol
  local variables: count, a table, indexed by clause, init. number of premises
                   inferred, a table, indexed by symbol, each entry initially false
                   agenda, a list of symbols, initially the symbols known in KB
  while agenda is not empty do
    p ← POP(agenda)
    unless inferred[p] do
      inferred[p] ← true
      for each Horn clause c in whose premise p appears do
        decrement count[c]
        if count[c] = 0 then do
          if HEAD[c] = q then return true
          PUSH(HEAD[c], agenda)
  return false
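The count-based algorithm is short enough to run directly. A Python sketch (the (premises, head) clause representation and the function name are my own choices):

```python
from collections import deque

def pl_fc_entails(clauses, facts, q):
    """Forward chaining over Horn clauses.
    clauses: list of (premises, head) pairs; facts: known symbols; q: query."""
    count = [len(prem) for prem, _ in clauses]   # unsatisfied premises per clause
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, head) in enumerate(clauses):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:
                    agenda.append(head)
    return False

# The example KB from the slides
clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(pl_fc_entails(clauses, ["A", "B"], "Q"))  # True
```

This checks the query when a symbol is popped rather than when it is pushed; the result is the same, and the run reproduces the agenda trace A, B, L, M, P, Q from the worked example.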
backward chaining
Backward Chaining 57
● Idea: work backwards from the query Q:
to prove Q by BC,
check if Q is known already, or
prove by BC all premises of some rule concluding Q
● Avoid loops: check if new subgoal is already on the goal stack
● Avoid repeated work: check if new subgoal
1. has already been proved true, or
2. has already failed
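The goal-stack idea above can be sketched as a short recursion (this version handles the loop check but omits the repeated-work memoization for brevity; clause representation and names are mine):

```python
def pl_bc_entails(clauses, facts, q, goals=frozenset()):
    """Backward chaining over Horn clauses.
    clauses: list of (premises, head); facts: set of known symbols; q: query.
    goals carries the current goal stack to cut off repeated subgoals."""
    if q in facts:
        return True
    if q in goals:          # repeated subgoal: avoid infinite descent
        return False
    for prem, head in clauses:
        if head == q and all(pl_bc_entails(clauses, facts, p, goals | {q})
                             for p in prem):
            return True
    return False

# The example KB from the slides
clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(pl_bc_entails(clauses, {"A", "B"}, "Q"))  # True
```

On the example, proving L via A ∧ P ⇒ L is cut off because P is already on the goal stack, so the prover falls back to A ∧ B ⇒ L, exactly as in the trace below.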
Backward Chaining Example 58
● A and B are known to be true
● Q needs to be proven
Backward Chaining Example 59
● Current goal: Q
● Q can be inferred by P ⇒ Q
● P needs to be proven
Backward Chaining Example 60
● Current goal: P
● P can be inferred by L ∧ M ⇒ P
● L and M need to be proven
Backward Chaining Example 61
● Current goal: L
● L can be inferred by A ∧ P ⇒ L
● A is already true
● P is already a goal
⇒ repeated subgoal
Backward Chaining Example 62
● Current goal: L
Backward Chaining Example 63
● Current goal: L
● L can be inferred by A ∧ B ⇒ L
● Both A and B are true
Backward Chaining Example 64
● Current goal: L
● L can be inferred by A ∧ B ⇒ L
● Both A and B are true
⇒ L is true
Backward Chaining Example 65
● Current goal: M
Backward Chaining Example 66
● Current goal: M
● M can be inferred by B ∧ L ⇒ M
Backward Chaining Example 67
● Current goal: M
● M can be inferred by B ∧ L ⇒ M
● Both B and L are true
⇒ M is true
Backward Chaining Example 68
● Current goal: P
● P can be inferred by L ∧ M ⇒ P
● Both L and M are true
⇒ P is true
Backward Chaining Example 69
● Current goal: Q
● Q can be inferred by P ⇒ Q
● P is true
⇒ Q is true
Forward vs. Backward Chaining 70
● FC is data-driven, cf. automatic, unconscious processing,
e.g., object recognition, routine decisions
● May do lots of work that is irrelevant to the goal
● BC is goal-driven, appropriate for problem-solving,
e.g., Where are my keys? How do I get into a PhD program?
● Complexity of BC can be much less than linear in size of KB
resolution
Resolution 72
● Conjunctive Normal Form (CNF—universal):
a conjunction of disjunctions of literals; the disjunctions are called clauses
E.g., (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D)
● Resolution inference rule (for CNF): complete for propositional logic
ℓ1 ∨ ⋯ ∨ ℓk,   m1 ∨ ⋯ ∨ mn
――――――――――――――――――――――――――――――
ℓ1 ∨ ⋯ ∨ ℓi−1 ∨ ℓi+1 ∨ ⋯ ∨ ℓk ∨ m1 ∨ ⋯ ∨ mj−1 ∨ mj+1 ∨ ⋯ ∨ mn
where ℓi and mj are complementary literals. E.g.,
P1,3 ∨ P2,2,   ¬P2,2
――――――――――
P1,3
● Resolution is sound and complete for propositional logic
Wumpus World 73
● Rules such as: “If breeze, then a pit is adjacent.”
B1,1 ⇔ (P1,2 ∨ P2,1)
Conversion to CNF 74
B1,1 ⇔ (P1,2 ∨ P2,1)
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α).
(B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β.
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. Move ¬ inwards using de Morgan’s rules and double-negation:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Apply distributivity law (∨ over ∧) and flatten:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
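Since each step is an equivalence, the CNF result must be true in exactly the same models as the original biconditional. A quick truth-table check in Python (the lambda encoding is mine):

```python
from itertools import product

# Original: B1,1 <=> (P1,2 v P2,1); CNF: the three clauses from step 4
original = lambda b11, p12, p21: b11 == (p12 or p21)
cnf = lambda b11, p12, p21: ((not b11 or p12 or p21) and
                             (not p12 or b11) and
                             (not p21 or b11))

# Compare the two sentences in all 8 models
print(all(original(*m) == cnf(*m)
          for m in product([False, True], repeat=3)))  # True
```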
Resolution Example 75
● KB = (B1,1 ⇔ (P1,2 ∨ P2,1))
reformulated as:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
● Observation: ¬B1,1
● Goal: disprove: α = ¬P1,2
● Resolution:
¬P1,2 ∨ B1,1,   ¬B1,1
――――――――――――
¬P1,2
● Resolution:
¬P1,2,   P1,2
――――――――
false
Resolution Example 76
● In practice: all resolvable pairs of clauses are combined
Resolution Algorithm 77
● Proof by contradiction, i.e., show KB ∧ ¬α unsatisfiable
function PL-RESOLUTION(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic
  clauses ← the set of clauses in the CNF representation of KB ∧ ¬α
  new ← { }
  loop do
    for each Ci, Cj in clauses do
      resolvents ← PL-RESOLVE(Ci, Cj)
      if resolvents contains the empty clause then return true
      new ← new ∪ resolvents
    if new ⊆ clauses then return false
    clauses ← clauses ∪ new
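The algorithm fits in a short Python sketch (clauses as frozensets of string literals, with "-X" for ¬X; the representation and names are my own, not from the slides):

```python
def pl_resolve(ci, cj):
    """All resolvents of two clauses (frozensets of literals; '-X' negates X)."""
    resolvents = []
    for lit in ci:
        comp = lit[1:] if lit.startswith("-") else "-" + lit
        if comp in cj:
            resolvents.append(frozenset((ci - {lit}) | (cj - {comp})))
    return resolvents

def pl_resolution(clauses):
    """True iff the clause set is unsatisfiable (derives the empty clause)."""
    clauses = set(clauses)
    while True:
        new = set()
        pairs = [(ci, cj) for ci in clauses for cj in clauses if ci != cj]
        for ci, cj in pairs:
            for r in pl_resolve(ci, cj):
                if not r:              # empty clause derived: contradiction
                    return True
                new.add(r)
        if new <= clauses:             # no new clauses: satisfiable
            return False
        clauses |= new

# KB ∧ ¬α from the resolution example: CNF of B1,1 ⇔ (P1,2 ∨ P2,1),
# the observation ¬B1,1, and the negated query P1,2
kb_and_not_alpha = [frozenset({"-B11", "P12", "P21"}),
                    frozenset({"-P12", "B11"}),
                    frozenset({"-P21", "B11"}),
                    frozenset({"-B11"}),
                    frozenset({"P12"})]
print(pl_resolution(kb_and_not_alpha))  # True: KB |= ¬P1,2
```

The run first resolves ¬P1,2 ∨ B1,1 with ¬B1,1 to get ¬P1,2, then ¬P1,2 with P1,2 to get the empty clause, mirroring the worked example above.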
Logical Agent 78
● Logical agent for the Wumpus world explores actions:
– observe glitter → done
– unexplored safe spot → plan route to it
– Wumpus in possible spot → shoot arrow
– otherwise take a risk and move to a possibly unsafe spot
● Propositional logic to infer state of the world
● Heuristic search to decide which action to take
Summary 79
● Logical agents apply inference to a knowledge base
to derive new information and make decisions
● Basic concepts of logic:
– syntax: formal structure of sentences
– semantics: truth of sentences wrt models
– entailment: necessary truth of one sentence given another
– inference: deriving sentences from other sentences
– soundness: derivations produce only entailed sentences
– completeness: derivations can produce all entailed sentences
● Wumpus world requires the ability to represent partial and negated information,
inference to determine state of the world, etc.
● Forward, backward chaining are linear-time, complete for Horn clauses
● Resolution is complete for propositional logic