Propositional Logic
Bang Liu, Jian-Yun Nie
IFT3335: Introduction to Artificial Intelligence
Readings: AIMA 7.1–7.5
(Slides are adapted from Tian-Li Yu, National Taiwan University, Artificial Intelligence - Search & Logic)
Outline
1 Logical Agents
2 Propositional Logic
3 Inference
4 Resolution and CNF
5 Forward Chaining
6 Backward Chaining
Logical Agents
Generic Knowledge-Based Agent
A knowledge base is a set of sentences in a formal language.
Declarative approach to developing an agent: Tell it what it needs to
know.
KB-Agent(percept)
1 Tell(KB,Make-Percept-Sentence(percept, t))
2 action = Ask(KB,Make-Action-Query(t))
3 Tell(KB,Make-Action-Sentence(action, t))
4 t = t +1
5 return action
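A minimal Python sketch of this control loop, assuming a hypothetical kb object with tell and ask methods; the sentence-building helpers below are placeholder encodings for illustration, not an API from the textbook.

# Sketch of the generic knowledge-based agent loop.
# The KnowledgeBase interface and the helpers are hypothetical placeholders.

def make_percept_sentence(percept, t):
    return f"Percept({percept}, {t})"      # placeholder encoding

def make_action_query(t):
    return f"BestAction({t})"              # placeholder encoding

def make_action_sentence(action, t):
    return f"Action({action}, {t})"        # placeholder encoding

class KBAgent:
    def __init__(self, kb):
        self.kb = kb   # hypothetical object with tell(sentence) and ask(query)
        self.t = 0     # time counter

    def __call__(self, percept):
        self.kb.tell(make_percept_sentence(percept, self.t))   # step 1
        action = self.kb.ask(make_action_query(self.t))        # step 2
        self.kb.tell(make_action_sentence(action, self.t))     # step 3
        self.t += 1                                            # step 4
        return action                                          # step 5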
Logical Agents
Wumpus World
Logical Agents
Wumpus World (PEAS)
Performance measure: gold +1000, death -1000, -1 per move, and -10 for
using the arrow.
Environment: 4 × 4 grid. Agent starts at [1, 1], facing right. One gold and
one wumpus are uniformly randomly located. Any square other than
the start can be a pit with probability 0.2.
Actuators: Forward, TurnLeft, TurnRight, Grab, Shoot (only
one arrow, going straight until it kills the wumpus or hits the
wall), Climb (only works at [1, 1]).
Sensors: Stench when adjacent to the wumpus. Breeze when
adjacent to a pit. Glitter when reaching the gold. Bump
when walking into a wall. Scream when the wumpus dies.
Logical Agents
Wumpus World
The agent sometimes needs to decide whether to go home empty-handed or
to risk going for the gold.
This environment does not guarantee that the agent can always get the
gold.
If the agent perceives Breeze at [1, 1], it cannot tell which Forward
direction is fatal and which is safe (both may be fatal).
With probability about 21%, the gold is in a pit or surrounded by
pits.
The agent's initial KB contains the rules of the environment.
It also knows it is in [1, 1] and that [1, 1] is a safe square (marked OK).
Logical Agents
Wumpus World
1st step
Percept: none.
→ [1, 2] and [2, 1] are OK.
Action: Forward.
2nd step
Percept: Breeze.
→ [2, 2] or [3, 1] are pits.
→ go back to [1, 1], then to [1, 2].
Action: TurnLeft, TurnLeft, Forward, TurnRight, Forward.
Logical Agents
Wumpus World
3rd step
Percept: Stench.
→ [2, 2]: OK; [1, 3]: wumpus.
→ Kill wumpus; go to [2, 2].
Action: Shoot, TurnRight, Forward.
5th step
Percept: Stench, Glitter, Breeze.
→ [2, 4] or [3, 3]: pits; [2, 3]: gold.
→ Get gold and go back.
Action: Grab, . . .
Logical Agents
Logics
Logics are formal languages. They tend to have strict syntax and to be
unambiguous.
(A formal language consists of words whose letters are taken from an
alphabet and are well-formed according to a specific set of rules. A
formal language is often defined by means of a formal grammar, which
consists of its formation rules. Formal languages are designed by
people for specific applications.)
Syntax defines the sentence structures in the language.
Semantics defines the meanings of sentences.
Semantics defines the truth of each sentence w.r.t. each possible world.
x + y = 4 is true in a world where x = 1 and y = 3.
We use the term model in place of possible world.
Models
If a sentence α is true in model m, we say:
m satisfies α.
m is a model of α.
m ∈ M(α), where M(α) is the set of all models of α.
Logical Agents
Entailment and Models
Entailment
Knowledge base KB entails sentence α if and only if α is true in all
worlds where KB is true, denoted as:
KB ⊨ α
(x + y = 4) ⊨ (4 = x + y).
x = 0 ⊨ xy = 0.
Theorem: α ⊨ β iff M(α) ⊆ M(β).
only if: ∀m ∈ M(α), β is true in m ⟺ ∀m ∈ M(α), m ∈ M(β) ⟺ M(α) ⊆ M(β).
if: ∀m ∈ M(α), m ∈ M(β) ⟺ ∀m where α is true, β is true ⟺ α ⊨ β.
Logical Agents
Back to Wumpus World
[1, 1] percept: None.
[2, 1] percept: Breeze.
The agent wants to know whether the unexplored adjacent squares [1, 2],
[2, 2], [3, 1] contain pits.
Logical Agents
The agent wants to know whether the unexplored adjacent squares [1, 2],
[2, 2], [3, 1] contain pits.
2³ = 8 possible models.
Consider two sentences: α1: no pit in [1, 2]; α2: no pit in [2, 2].
KB ⊨ α1; KB ⊭ α2.
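One way to see this concretely: enumerate the 2³ = 8 assignments to the three pit symbols, keep those consistent with the KB, and check α1 and α2 in each. A small self-contained sketch; the encoding of the percepts into kb is ours, not from the slides.

from itertools import product

# The three unexplored squares; each may or may not contain a pit.
symbols = ["P12", "P22", "P31"]

def kb(m):
    # KB restricted to these squares (our encoding of the percepts):
    # no breeze in [1,1]  =>  no pit in [1,2];
    # breeze in [2,1]     =>  a pit in [2,2] or [3,1]
    #                         (since [1,1] is known pit-free).
    return (not m["P12"]) and (m["P22"] or m["P31"])

def entails(sentence):
    # KB |= sentence iff sentence holds in every model of the KB.
    models = [dict(zip(symbols, values))
              for values in product([True, False], repeat=3)]
    return all(sentence(m) for m in models if kb(m))

print(entails(lambda m: not m["P12"]))  # alpha1: True  (KB entails alpha1)
print(entails(lambda m: not m["P22"]))  # alpha2: False (KB does not)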
Propositional Logic
Propositional Logic
Proposition: a declarative sentence that is either true or false.
P = NP is a proposition.
“How are you?” is not.
Propositional logic usually does not consider time.
If the truth of a proposition varies over time, we call it a fluent.
“Today is Monday” is a fluent.
Atomic propositions are the minimal propositions.
Literals are atomic propositions or their negations (p or ¬p).
Propositional Logic
Propositional Logic
The following grammar in Backus-Naur form (BNF) defines the syntax.
Truth tables define the semantics.
Sentence → AtomicSentence | ComplexSentence
AtomicSentence → True | False | P | Q | R | . . .
ComplexSentence → (Sentence) | [Sentence]   (parentheses can change operator precedence)
| ¬Sentence
| Sentence ∧ Sentence
| Sentence ∨ Sentence
| Sentence ⇒ Sentence
| Sentence ⇔ Sentence
Operator precedence: ¬, ∧, ∨, ⇒, ⇔
(not, and, or, implies, if and only if)
Inference
Inference by Enumeration
TT-Entails(KB, α)
1 symbols = a list of the proposition symbols in KB and α
2 return TT-Check-All(KB, α, symbols, {})

TT-Check-All(KB, α, symbols, model)
1 if Empty(symbols)
2   if PL-True(KB, model)   // PL-True returns true if a sentence holds within a model
3     return PL-True(α, model)
4   else return true
5 else
6   P = First(symbols)   // check different models/worlds with different
7   rest = Rest(symbols)   // values assigned to the symbols
8   return TT-Check-All(KB, α, rest, model ∪ {P = true})
    and TT-Check-All(KB, α, rest, model ∪ {P = false})
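A runnable Python version of the same check, a minimal sketch that represents sentences as functions from a model (a dict mapping symbols to truth values) to bool; it enumerates models iteratively rather than recursively.

from itertools import product

def tt_entails(kb, alpha, symbols):
    """Truth-table entailment: KB |= alpha iff alpha holds in
    every model (assignment of truth values) where KB holds."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False   # found a model of KB where alpha fails
    return True

# Example: {A, A => B} entails B.
kb = lambda m: m["A"] and ((not m["A"]) or m["B"])   # A and (A => B)
alpha = lambda m: m["B"]
print(tt_entails(kb, alpha, ["A", "B"]))   # True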
Inference
Standard Logical Equivalences
α ≡ β iff α ⊨ β and β ⊨ α.
(Note that ≡ is used to make claims about sentences, while ⇔ is used as
a part of a sentence.)
(α ∧ β) ≡ (β ∧ α)   commutativity of ∧
(α ∨ β) ≡ (β ∨ α)   commutativity of ∨
((α ∧ β) ∧ γ) ≡ (α ∧ (β ∧ γ))   associativity of ∧
((α ∨ β) ∨ γ) ≡ (α ∨ (β ∨ γ))   associativity of ∨
¬(¬α) ≡ α   double-negation elimination
(α ⇒ β) ≡ (¬β ⇒ ¬α)   contraposition
(α ⇒ β) ≡ (¬α ∨ β)   implication elimination
(α ⇔ β) ≡ ((α ⇒ β) ∧ (β ⇒ α))   biconditional elimination
¬(α ∧ β) ≡ (¬α ∨ ¬β)   De Morgan
¬(α ∨ β) ≡ (¬α ∧ ¬β)   De Morgan
(α ∧ (β ∨ γ)) ≡ ((α ∧ β) ∨ (α ∧ γ))   distributivity of ∧ over ∨
(α ∨ (β ∧ γ)) ≡ ((α ∨ β) ∧ (α ∨ γ))   distributivity of ∨ over ∧
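Each equivalence can be verified mechanically with a small truth table. A quick sketch checking contraposition and De Morgan over all assignments:

from itertools import product

def implies(a, b):
    # Material implication: a => b is false only when a is true and b is false.
    return (not a) or b

# Contraposition: (a => b) is equivalent to (not b => not a).
assert all(implies(a, b) == implies(not b, not a)
           for a, b in product([False, True], repeat=2))

# De Morgan: not (a and b) is equivalent to (not a) or (not b).
assert all((not (a and b)) == ((not a) or (not b))
           for a, b in product([False, True], repeat=2))
print("equivalences verified")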
Inference
Validity and Satisfiability
A sentence is valid if it is true in all models.
True, A ∨ ¬A
Validity is connected to inference via the Deduction Theorem:
KB ⊨ α iff (KB ⇒ α) is valid.
A sentence is satisfiable if it is true in some model.
A ∧ B, A
A sentence is unsatisfiable if it is true in no model.
A ∧ ¬A
Satisfiability is connected to inference via Reductio ad Absurdum
(proof by contradiction):
KB ⊨ α iff (KB ∧ ¬α) is unsatisfiable.
Inference
Inference
Inference: the way we derive conclusions with inference rules.
Inference procedure i can derive α from KB, denoted as
KB ⊢i α
Soundness: i is sound if
(KB ⊢i α) ⇒ (KB ⊨ α), i.e., any conclusion i derives from KB is true.
Completeness: i is complete if
(KB ⊨ α) ⇒ (KB ⊢i α), i.e., any sentence entailed by KB can be derived by i.
For a KB consisting of only propositional logic or first-order logic
(FOL) sentences, there exists a sound and complete inference procedure.
FOL is expressive enough to express many things in the real world.
Inference
Simple Knowledge Base Using Propositional Logic
Px,y is true if there is a pit in [x, y].
Wx,y is true if there is a wumpus in [x, y].
Bx,y is true if the agent perceives Breeze in [x, y].
Sx,y is true if the agent perceives Stench in [x, y].
KB
R1: ¬P1,1.
R2: B1,1 ⇔ (P1,2 ∨ P2,1).
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1).
R4: ¬B1,1.
R5: B2,1.
Inference
Inference of the Wumpus World
Biconditional elimination from R2:
R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1).
And-elimination from R6:
R7: (P1,2 ∨ P2,1) ⇒ B1,1.
Contraposition from R7:
R8: ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1).
Modus Ponens from R8 and R4:
R9: ¬(P1,2 ∨ P2,1).
De Morgan's rule from R9:
R10: ¬P1,2 ∧ ¬P2,1.
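The chain R6-R10 can also be checked mechanically by proof by contradiction: KB ⊨ ¬P1,2 iff KB ∧ P1,2 is unsatisfiable. A sketch using the third-party sympy library (assuming it is installed; the variable names are ours):

from sympy import symbols
from sympy.logic.boolalg import And, Equivalent, Not, Or
from sympy.logic.inference import satisfiable

P11, P12, P21, B11 = symbols("P11 P12 P21 B11")

# R1, R2, R4 from the knowledge base above.
kb = And(Not(P11),                       # R1: no pit in [1,1]
         Equivalent(B11, Or(P12, P21)),  # R2: breeze iff adjacent pit
         Not(B11))                       # R4: no breeze perceived in [1,1]

# KB |= ~P12 iff KB & P12 is unsatisfiable.
print(satisfiable(And(kb, P12)))   # False -> KB entails ~P12, as in R10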
Resolution and CNF
Proof by Resolution
Convert a sentence into conjunctive normal form (CNF):
a conjunction of disjunctions of literals; each disjunction is called a clause.
(A ∨ ¬B) ∧ (B ∨ ¬C ∨ D)
Resolution Inference Rule
ℓ1 ∨ · · · ∨ ℓk,   m1 ∨ · · · ∨ mn
-----------------------------------------------------------------
ℓ1 ∨ · · · ∨ ℓi-1 ∨ ℓi+1 ∨ · · · ∨ ℓk ∨ m1 ∨ · · · ∨ mj-1 ∨ mj+1 ∨ · · · ∨ mn
where ℓi and mj are complementary literals.
Example: from P1,1 ∨ P3,1 and ¬P1,1 ∨ ¬P2,2, derive P3,1 ∨ ¬P2,2.
Quiz: if you have A ∨ B ∨ ¬C and A ∨ ¬B ∨ C, can you derive A?
(Hint: resolution removes only one pair of complementary literals at a time.)
Resolution and CNF
Conjunctive Normal Form (CNF) Conversion
B1,1 ⇔ (P1,2 ∨ P2,1)
1 Eliminate ⇔ by biconditional elimination:
(B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2 Eliminate ⇒ by (α ⇒ β) ≡ (¬α ∨ β):
(¬B1,1 ∨ (P1,2 ∨ P2,1)) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3 "Move ¬ inwards" by double-negation elimination and De Morgan:
(¬B1,1 ∨ (P1,2 ∨ P2,1)) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4 Distribute ∨ over ∧ and "flatten":
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
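These four steps are what an off-the-shelf converter automates. A sketch using sympy's to_cnf (a third-party library, shown here only to cross-check the result above):

from sympy import symbols
from sympy.logic.boolalg import Equivalent, Or, to_cnf

B11, P12, P21 = symbols("B11 P12 P21")

# B1,1 <=> (P1,2 v P2,1), converted to conjunctive normal form.
print(to_cnf(Equivalent(B11, Or(P12, P21))))
# Matches the result above up to clause and literal order:
# (B11 | ~P12) & (B11 | ~P21) & (P12 | P21 | ~B11)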
Resolution and CNF
Resolution Algorithm
Proof by contradiction: showing KB ∧ ¬α is unsatisfiable.
PL-Resolution(KB, α)
1 clauses = the set of clauses in the CNF representation of KB ∧ ¬α
2 new = {}
3 repeat
4   for each pair of clauses Ci, Cj in clauses do
5     resolvents = PL-Resolve(Ci, Cj)
6     if resolvents contains the empty clause   // e.g., A and ¬A resolve to the empty clause
7       return true
8     new = new ∪ resolvents
9   if new ⊆ clauses
10    return false
11  clauses = clauses ∪ new
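A compact runnable sketch of this algorithm, representing each clause as a frozenset of literal strings ('~' marks negation); the example KB and query come from the next slide.

from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def pl_resolve(ci, cj):
    """All resolvents of two clauses (clauses are frozensets of literals)."""
    return [frozenset((ci - {lit}) | (cj - {negate(lit)}))
            for lit in ci if negate(lit) in cj]

def pl_resolution(clauses):
    """Return True iff the clause set is unsatisfiable
    (i.e., resolution derives the empty clause)."""
    clauses = set(clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for resolvent in pl_resolve(ci, cj):
                if not resolvent:          # empty clause: contradiction found
                    return True
                new.add(resolvent)
        if new <= clauses:                 # no new clauses: satisfiable
            return False
        clauses |= new

# KB = B1,1 <=> (P1,2 v P2,1) in CNF, plus ~B1,1; query alpha = ~P1,2.
kb_and_not_alpha = [frozenset({"~B11", "P12", "P21"}),
                    frozenset({"~P12", "B11"}),
                    frozenset({"~P21", "B11"}),
                    frozenset({"~B11"}),
                    frozenset({"P12"})]    # negation of alpha
print(pl_resolution(kb_and_not_alpha))     # True: KB entails ~P12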
Resolution and CNF
Resolution Example
KB: B1,1 ⇔ (P1,2 ∨ P2,1).
KB(CNF): (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1).
After we know (add into KB) ¬B1,1, we would like to assert α = ¬P1,2.
PL-Resolution resolves KB ∧ ¬α to the empty clause.
Resolution and CNF
Resolution is Sound and Complete
Soundness is not surprising since the resolution rule is sound (check the
truth table).
Resolution is also complete.
Resolution closure RC(S) of a set of clauses S: the set of all clauses
derivable by resolution from S.
The final value of clauses in PL-Resolution is RC(S).
RC(S) is finite, and hence PL-Resolution always terminates.
Ground Resolution Theorem
S is unsatisfiable ⇒ RC(S) contains the empty clause.
Resolution and CNF
Ground Resolution Theorem
Proof by contraposition.
RC(S) does not contain the empty clause ⇒ S is satisfiable.
If the empty clause ∉ RC(S), construct a model for S with suitable values
for the symbols P1, . . . , Pk:
For i from 1 to k,
if a clause in RC(S) contains ¬Pi and all its other literals are false,
assign false to Pi;
otherwise, assign true to Pi.
For a clause in S to be false under this assignment, it must have the form
(false ∨ · · · ∨ false ∨ Pi) or (false ∨ · · · ∨ false ∨ ¬Pi).
However, the assignment above makes such a clause true. Therefore, the
assignment is a model of S.
Resolution and CNF
Horn and Definite Clauses
The completeness of resolution is good.
For many real-world applications, more efficient inference can be
achieved by adding some restrictions.
Definite clause: a disjunction of literals where exactly one is positive.
Horn clause: a disjunction of literals where at most one is positive.
Horn clauses are closed under resolution:
Resolving two Horn clauses yields a Horn clause.
Another way to view Horn clauses: a Horn clause is an implication:
True ⇒ symbol.
(conjunction of symbols) ⇒ symbol.
e.g., ¬A ∨ ¬B ∨ C ≡ ¬(A ∧ B) ∨ C ≡ (A ∧ B) ⇒ C.
Deciding entailment with Horn clauses can be done in linear time!
Forward and backward chaining.
Forward Chaining
Forward Chaining
Resolution for Horn clauses (Modus Ponens):
α1, · · · , αn,   (α1 ∧ · · · ∧ αn) ⇒ β
--------------------------------------
β
Main idea:
Count the unknown premises in each clause.
Decrease the count when a premise becomes known.
When a count reaches zero, add the conclusion as a known fact.
Record the inferred symbols to avoid redundant work
(e.g., P ⇒ Q, Q ⇒ P).
Forward Chaining
Forward Chaining
PL-FC-Entails(KB, q)
count[c]: number of symbols in clause c's premise.
inferred[p]: initially false for every symbol p.
agenda: a queue of symbols, initially the known facts in KB.
1 while agenda is not empty do
2   p = Pop(agenda)
3   if p == q
4     return true
5   if inferred[p] == false
6     inferred[p] = true
7     for each clause c in KB where p is in c.premise
8       decrement count[c]
9       if count[c] == 0
10        add c.conclusion to agenda
11 return false
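A runnable sketch of the same procedure in Python; each definite clause is a (premises, conclusion) pair, the known facts seed the agenda, and the KB is the one used in the example that follows.

from collections import deque

def fc_entails(clauses, facts, q):
    """Forward chaining over definite clauses.
    clauses: list of (premises, conclusion) pairs (premises: set of symbols).
    facts: symbols known to be true (they seed the agenda)."""
    count = {i: len(premises) for i, (premises, _) in enumerate(clauses)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (premises, conclusion) in enumerate(clauses):
                if p in premises:
                    count[i] -= 1
                    if count[i] == 0:   # all premises known: fire the rule
                        agenda.append(conclusion)
    return False

clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(fc_entails(clauses, ["A", "B"], "Q"))   # True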
Forward Chaining
Forward Chaining Example
Fire any rule whose premises are satisfied in the KB and add its
conclusion to the KB, until the query is found.
KB: P ⇒ Q; L ∧ M ⇒ P; B ∧ L ⇒ M; A ∧ P ⇒ L; A ∧ B ⇒ L; A; B.

Clause       Step 1   Step 2   Step 3
P ⇒ Q           1        1        1
L ∧ M ⇒ P       2        2        2
B ∧ L ⇒ M       2        2        1
A ∧ P ⇒ L       2        1        1
A ∧ B ⇒ L       2        1        0
agenda       [A, B]     [B]      [L]
Forward Chaining
Forward Chaining Example
Fire any rule whose premises are satisfied in the KB and add its
conclusion to the KB, until the query is found.
Step 4 Step 5 Step 6
Clause       Step 4   Step 5   Step 6
P ⇒ Q           1        1        0
L ∧ M ⇒ P       1        0        0
B ∧ L ⇒ M       0        0        0
A ∧ P ⇒ L       1        1        0
A ∧ B ⇒ L       0        0        0
agenda        [M]      [P]     [Q, L]
Forward Chaining
Completeness of Forward Chaining
FC reaches a fixed point where no new inferences are possible.
Consider a model m which assigns true to every inferred symbol and
false to all other symbols.
Every clause in KB is true in m:
Proof.
- Suppose a clause α1 ∧ · · · ∧ αk ⇒ β is false in m.
- Then α1 ∧ · · · ∧ αk is true in m and β is false in m.
- But then the clause can still fire, so FC has not reached a fixed
point: a contradiction.
m ∈ M(KB) (m is a model of KB).
If KB ⊨ q, then q is true in m; since m assigns true only to inferred
symbols, FC has inferred q.
Therefore, FC derives every atomic sentence that is entailed by KB.
Backward Chaining
Backward Chaining
Work backward from the query Q.
If Q is known, done.
Otherwise, check its premise P (via P ⇒ Q in the example KB).
The next step checks L and M.
Identical to the And-Or-Graph-Search in
the textbook.
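A minimal recursive sketch of backward chaining over the same (premises, conclusion) representation used in the forward-chaining sketch above; the visited set is path-local loop checking, which suffices for propositional Horn clauses (e.g., against P ⇒ Q, Q ⇒ P).

def bc_entails(clauses, facts, q, visited=None):
    """Backward chaining: try to prove goal q from facts and definite clauses."""
    if visited is None:
        visited = frozenset()
    if q in facts:
        return True
    if q in visited:              # loop check along the current proof path
        return False
    visited = visited | {q}
    # q holds if some clause concludes q and all its premises can be proved.
    return any(all(bc_entails(clauses, facts, p, visited) for p in premises)
               for premises, conclusion in clauses if conclusion == q)

clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(bc_entails(clauses, {"A", "B"}, "Q"))   # True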
Backward Chaining
Backward Chaining Example
Blue ring: Checking.
Blue circle: Checked.
Red circle: Facts.
Backward Chaining
Forward vs. Backward Chaining
Both time complexities are linear in the size of the KB.
Forward chaining is data-driven.
Backward chaining is goal-driven.
In general, backward chaining is more appropriate for problem solving:
Where's my key?
How to pass this course?
Forward chaining may generate many conclusions irrelevant to the
goal.
In practice, the running time of backward chaining can be much less
than linear in the size of the KB, because it touches only rules
relevant to the goal.
Backward Chaining
Pros and Cons of Propositional Logic
Pro
Propositional logic is declarative: pieces of syntax correspond to facts.
Propositional logic allows partial/disjunctive/negated information
(unlike most data structures and databases).
Propositional logic is compositional:
Meaning of B1,1 ∧ P1,2 is derived from the meaning of B1,1 and of P1,2.
Meaning in propositional logic is context-independent (unlike natural
language, where meaning depends on context).
Con
Propositional logic has very limited expressive power.
Cannot say “pits cause breezes in adjacent squares” except by writing
one sentence for each square.
Backward Chaining
Summary
Knowledge base contains sentences in a knowledge representation
language.
A representation language is defined by its syntax and semantics,
which defines the truth of each sentence in each model.
α entails β if β is true in all models where α is true. Equivalent
definitions: validity of α ⇒ β; unsatisfiability of α ∧ ¬β.
Sound inferences derive only sentences that are entailed; complete
inferences derive all sentences that are entailed.
Resolution is a sound and complete inference procedure for propositional
logic, where the KB is expressed in CNF.
Forward and backward chaining are sound and complete for KBs in
Horn form (more restricted than full propositional logic).