
Artificial Intelligence

Chapter 7: Logical Agents


Andreas Zell

After the textbook: Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig (3rd Edition)

7. Logical Agents

• 7.1 Knowledge-Based Agents


• 7.2 The Wumpus World
• 7.3 Logic
• 7.4 Propositional Logic
• 7.5 Propositional Theorem Proving
• 7.6 Effective Propositional Model Checking
• 7.7 Propositional Logic Agents
• 7.8 Summary

7.1 Knowledge-Based Agents

• Logical agents are always definite – each proposition is either true, false, or unknown (agnostic)
• Knowledge representation language – a language used to express knowledge about the world
• Declarative approach – the language is designed so that knowledge about the world can be expressed easily
• Procedural approach – encodes desired behaviors directly in program code


7.1 Knowledge-Based Agents

• Sentence – a statement expressing a truth about the world in the knowledge representation language
• Knowledge base (KB) – a set of sentences describing the world
• Background knowledge – initial knowledge in the KB
• Knowledge level – we only need to specify what the agent knows and what its goals are in order to specify its behavior
• Tell(P) – function that adds knowledge P to the KB
• Ask(P) – function that queries the agent about the truth of P
7.1 Knowledge-Based Agents

• Inference – the process of deriving new sentences from the knowledge base
• When the agent draws a conclusion from available information, it is guaranteed to be correct if the available information is correct

function KB-Agent(percept) returns an action
  persistent: KB, a knowledge base
              t, a counter, initially 0, indicating time

  Tell(KB, Make-Percept-Sentence(percept, t))
  action ← Ask(KB, Make-Action-Query(t))
  Tell(KB, Make-Action-Sentence(action, t))
  t ← t + 1
  return action

A generic knowledge-based agent
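
A skeletal Python rendering of the same loop, assuming a kb object that provides tell() and ask(); the sentence constructors here are trivial stand-ins, not a concrete sentence representation:

  def make_percept_sentence(percept, t):
      return f"Percept({percept}, {t})"

  def make_action_query(t):
      return f"ActionAt({t})"

  def make_action_sentence(action, t):
      return f"Action({action}, {t})"

  class KBAgent:
      """Generic knowledge-based agent: tell the KB what was perceived,
      ask it what to do, then tell it which action was chosen."""
      def __init__(self, kb):
          self.kb = kb   # any object providing tell(sentence) and ask(query)
          self.t = 0     # time counter

      def __call__(self, percept):
          self.kb.tell(make_percept_sentence(percept, self.t))
          action = self.kb.ask(make_action_query(self.t))
          self.kb.tell(make_action_sentence(action, self.t))
          self.t += 1
          return action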

7.2 The Wumpus World

• Environment:
  • Squares adjacent to the wumpus are smelly
  • Squares adjacent to a pit are breezy
  • Glitter iff gold is in the same square
  • Shooting kills the wumpus if you are facing it
  • Shooting uses up the only arrow
  • Grabbing picks up the gold if in the same square
  • Releasing drops the gold in the same square
7.2 The Wumpus World

• Performance measure:
  • gold +1000
  • death by pit/wumpus -1000
  • -1 per action
  • -10 for using the arrow
• Actuators:
  • TurnLeft (90°), TurnRight (90°), Forward,
    Grab (gold), Shoot (arrow), Climb (at [1,1])
• Sensors:
  • Stench, Breeze, Glitter, Bump, Scream

7.2 The Wumpus World

• Observable?    No – only local perception
• Deterministic? Yes – outcomes exactly specified
• Episodic?      No – sequential at the level of actions
• Static?        Yes – wumpus and pits do not move
• Discrete?      Yes
• Single-agent?  Yes – the wumpus is essentially a natural feature

7.2 The Wumpus World
First percept at [1,1]: [None, None, None, None, None]
Percept at [2,1]:       [None, Breeze, None, None, None]
(Percept format: [Stench, Breeze, Glitter, Bump, Scream])


7.2 The Wumpus World


Percept at [1,2]: [Stench, None, None, None, None]
Percept at [2,3]: [Stench, Breeze, Glitter, None, None]

7.3 Logic

• Logics – formal languages for representing information such that conclusions can be drawn
• Syntax – specifies the well-formed sentences of the representation language
• Semantics – defines the “meaning” (truth) of a sentence in the representation language w.r.t. each possible world
• Model – a possible world being described by a KB
• Satisfaction – a model m satisfies a sentence α if α is true in m


7.3 Logic

• Entailment – the relation that one sentence follows from another:
  α ╞ β iff in every model in which α is true, β is also true
• Logical inference – the process of using entailment to derive conclusions
• Model checking – enumeration of all possible models to check that a sentence α is true in all models in which the KB is true
• M(α) is the set of all models of α

7.3 Logic

KB = wumpus-world rules + observations


α1 = “[1,2] is safe”, KB ╞ α1, proved by model checking


7.3 Logic

KB = wumpus-world rules + observations


α2 = “[2,2] is safe”, KB ⊭ α2 (not entailed: some models of the KB have a pit in [2,2])

7.3 Logic

• If an inference algorithm i can derive α from KB, we write KB ├i α.
• Sound (truth-preserving) inference – an inference
algorithm that derives only entailed sentences
• if KB is true in the real world, then any sentence α derived
from KB by a sound inference procedure is also true in the
real world
• Complete inference procedure – an inference proc.
that can derive any sentence that is entailed
• Grounding – the connection between logical
reasoning processes and the real environment in
which the agent exists

7.4 Propositional Logic

• Atomic sentence – consists of a single propositional symbol, which is True or False
• Complex sentence – constructed from simpler sentences using parentheses and logical connectives:
  • ¬ (not) – negation                  (highest priority)
  • ˄ (and) – conjunction
  • ˅ (or) – disjunction
  • ⇒ (implies) – implication (premise ⇒ conclusion)
  • ⇔ (if and only if) – biconditional  (lowest priority)

7.4 Propositional Logic

• Truth table – a (simple) representation of a complex sentence by enumerating its truth value for every possible assignment to its symbols.
• Truth table for the connectives:

  P      Q      ¬P     P˄Q    P˅Q    P⇒Q    P⇔Q
  false  false  true   false  false  true   true
  false  true   true   false  true   true   false
  true   false  false  false  true   false  false
  true   true   false  true   true   true   true
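
A small Python check of this table (my own illustration; Python's built-in operators stand in for the connectives, with == used for the biconditional):

  from itertools import product

  print("P      Q      | ~P     P&Q    P|Q    P=>Q   P<=>Q")
  for P, Q in product([False, True], repeat=2):
      row = [not P, P and Q, P or Q, (not P) or Q, P == Q]
      print(f"{P!s:<6} {Q!s:<6} | " + " ".join(f"{v!s:<6}" for v in row))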


7.4 Propositional Logic

• Wumpus world symbols:
  • Px,y is true if there is a pit in [x,y]
  • Wx,y is true if there is a wumpus in [x,y]
  • Bx,y is true if there is a breeze in [x,y]
  • Sx,y is true if there is a stench in [x,y]

• Sentences Ri:
  • No pit in [1,1]:
    R1: ¬P1,1
  • Pits cause breezes in adjacent squares:
    R2: B1,1 ⇔ (P1,2 ˅ P2,1)
    R3: B2,1 ⇔ (P1,1 ˅ P2,2 ˅ P3,1)
  • Percepts for the first two squares visited:
    R4: ¬B1,1
    R5: B2,1

7.4 Propositional Logic by Model Checking

B1,1   B2,1   P1,1   P1,2   P2,1   P2,2   P3,1     R1     R2     R3     R4     R5     KB
false  false  false  false  false  false  false    true   true   true   true   false  false
false  false  false  false  false  false  true     true   true   false  true   false  false
 ...
false  true   false  false  false  false  false    true   true   false  true   true   false
false  true   false  false  false  false  true     true   true   true   true   true   true
false  true   false  false  false  true   false    true   true   true   true   true   true
false  true   false  false  false  true   true     true   true   true   true   true   true
false  true   false  false  true   false  false    true   false  false  true   true   false
 ...
true   true   true   true   true   true   true     false  true   true   false  true   false

Fig. 7.8: Truth table for the wumpus-world KB, consisting of 2^7 = 128 rows, one for
each assignment of truth values to the 7 proposition symbols B1,1, …, P3,1. The KB is
true if R1 through R5 are all true, which occurs in just 3 rows.


7.4.4 Propositional Model Checking
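
No text survived from this slide in this extraction. A minimal Python sketch of propositional model checking for the wumpus sentences R1–R5 (my own illustration of truth-table enumeration, not the book's code) is:

  from itertools import product

  symbols = ["B11", "B21", "P11", "P12", "P21", "P22", "P31"]

  def kb_true(m):
      """R1..R5 from the previous slides, evaluated in model m."""
      r1 = not m["P11"]
      r2 = m["B11"] == (m["P12"] or m["P21"])
      r3 = m["B21"] == (m["P11"] or m["P22"] or m["P31"])
      r4 = not m["B11"]
      r5 = m["B21"]
      return r1 and r2 and r3 and r4 and r5

  def kb_entails(alpha):
      """KB |= alpha iff alpha holds in every model in which the KB is true."""
      return all(alpha(m)
                 for values in product([False, True], repeat=len(symbols))
                 for m in [dict(zip(symbols, values))]
                 if kb_true(m))

  print(kb_entails(lambda m: not m["P12"]))   # True: no pit in [1,2]
  print(kb_entails(lambda m: not m["P22"]))   # False: the KB does not settle [2,2]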

7.5 Propositional Theorem Proving

• A knowledge base can be represented as a conjunction of all its sentences, since it asserts that all of them are true.
• Every known inference algorithm for propositional logic has a worst-case complexity exponential in the size of the input.
• Logical equivalence – two sentences α and β are logically equivalent if they are true in the same set of models.
• Validity – a sentence is valid if it is true in all models.
  • Valid sentences are also called tautologies – sentences that are necessarily true.


7.5 Propositional Theorem Proving

• Deduction theorem – for any sentences α and β, α ╞ β if and only if the sentence (α ⇒ β) is valid.
• Satisfiability – a sentence is satisfiable if it is true in some model.
  • Determining satisfiability in propositional logic is NP-complete.
  • Proof by contradiction: α ╞ β if and only if the sentence ¬(α ⇒ β), or equivalently (α ˄ ¬β), is unsatisfiable.
• Inferential equivalence – two sentences α and β are inferentially equivalent if the satisfiability of α implies the satisfiability of β and vice versa.

7.5 Propositional Theorem Proving

Fig. 7.11: Standard logical equivalences. The symbols α, β, and γ stand for arbitrary sentences of propositional logic.


7.5 Propositional Theorem Proving

• Inference rules are used to derive a proof.
• Common patterns:
  • Modus Ponens: from α ⇒ β and α, infer β
  • And-Elimination: from α ˄ β, infer α
• Finding a proof can be efficient, because irrelevant propositions can be ignored.
• Monotonicity: the set of entailed sentences can only increase as information is added to the KB.
7.5 Propositional Theorem Proving

Example: prove ¬P1,2 from R1 through R5:
• Apply biconditional elimination to R2 to obtain
  R6: (B1,1 ⇒ (P1,2 ˅ P2,1)) ˄ ((P1,2 ˅ P2,1) ⇒ B1,1)
• Apply And-Elimination to obtain
  R7: (P1,2 ˅ P2,1) ⇒ B1,1
• Contraposition gives
  R8: ¬B1,1 ⇒ ¬(P1,2 ˅ P2,1)
• Modus Ponens with R8 and the percept ¬B1,1 (R4) gives
  R9: ¬(P1,2 ˅ P2,1)
• De Morgan's rule gives
  R10: ¬P1,2 ˄ ¬P2,1
that is, neither [1,2] nor [2,1] contains a pit.


7.5 Propositional Theorem Proving

Conjunctive normal form (CNF) – every sentence of propositional logic is logically
equivalent to a conjunction of clauses. E.g. convert B1,1 ⇔ (P1,2 ˅ P2,1) to CNF:
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ˄ (β ⇒ α):
   (B1,1 ⇒ (P1,2 ˅ P2,1)) ˄ ((P1,2 ˅ P2,1) ⇒ B1,1)
2. Eliminate ⇒, replacing α ⇒ β with ¬α ˅ β:
   (¬B1,1 ˅ P1,2 ˅ P2,1) ˄ (¬(P1,2 ˅ P2,1) ˅ B1,1)
3. Move ¬ inwards using De Morgan's rules and double-negation elimination:
   (¬B1,1 ˅ P1,2 ˅ P2,1) ˄ ((¬P1,2 ˄ ¬P2,1) ˅ B1,1)
4. Apply the distributivity law (˅ over ˄) and flatten:
   (¬B1,1 ˅ P1,2 ˅ P2,1) ˄ (¬P1,2 ˅ B1,1) ˄ (¬P2,1 ˅ B1,1)
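
A quick way to check this result (assuming the third-party sympy library, which is not part of the slides):

  from sympy import symbols
  from sympy.logic.boolalg import Equivalent, to_cnf

  B11, P12, P21 = symbols("B11 P12 P21")
  print(to_cnf(Equivalent(B11, P12 | P21)))
  # expected, up to ordering of the clauses:
  # (B11 | ~P12) & (B11 | ~P21) & (P12 | P21 | ~B11)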
7.5 Propositional Theorem Proving

Resolution algorithm: Proof by contradiction, i.e., show KB ˄ ¬α unsatisfiable
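
A minimal Python sketch of such a refutation check (my own rendering, not taken from the slides; clauses are frozensets of string literals, with "~" marking negation):

  from itertools import combinations

  def negate(lit):
      return lit[1:] if lit.startswith("~") else "~" + lit

  def resolve(ci, cj):
      """All resolvents of two clauses (each clause is a frozenset of literals)."""
      return [(ci - {lit}) | (cj - {negate(lit)}) for lit in ci if negate(lit) in cj]

  def pl_resolution(kb_clauses, negated_query_clauses):
      """True iff the empty clause can be derived from KB together with ¬α."""
      clauses = {frozenset(c) for c in kb_clauses} | {frozenset(c) for c in negated_query_clauses}
      while True:
          new = set()
          for ci, cj in combinations(clauses, 2):
              for resolvent in resolve(ci, cj):
                  if not resolvent:        # empty clause: contradiction, so KB |= α
                      return True
                  new.add(resolvent)       # factoring is automatic: clauses are sets
          if new <= clauses:               # no new clauses: KB does not entail α
              return False
          clauses |= new

  # Example: KB = {¬P ˅ Q, P}, query α = Q, so ¬α = {¬Q}
  print(pl_resolution([{"~P", "Q"}, {"P"}], [{"~Q"}]))   # True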


7.5 Propositional Theorem Proving

Resolution Example from Wumpus World

7.5 Propositional Theorem Proving

• Definite clause – a disjunction of literals of which exactly one is positive, e.g. ¬P1 ˅ ¬P2 ˅ ¬P3 ˅ P4
• Horn clause – a disjunction of literals of which at most one is positive, e.g. ¬P1 ˅ ¬P2, or ¬P3 ˅ P4
  • Can be used with forward chaining or backward chaining
  • Deciding entailment is linear in the size of the KB
• Goal clause – a clause with no positive literals, e.g. ¬P1 ˅ ¬P2
• Forward chaining – a sound and complete inference algorithm that is essentially repeated Modus Ponens
  • data-driven reasoning: reasoning that starts from the known data (see the sketch below)
• Backward chaining – goal-directed reasoning; reasoning that works backward from the goal
  • Often runs in much less than linear time, because it touches only relevant facts
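
A minimal forward-chaining sketch over definite clauses (my own Python rendering, not from the slides); each clause is a pair (premises, conclusion), and facts have empty premises. The example KB is the set of Horn clauses shown on the next slide:

  from collections import deque

  def pl_fc_entails(kb, q):
      """Forward chaining: repeatedly apply Modus Ponens until q is derived."""
      count = {i: len(prem) for i, (prem, _) in enumerate(kb)}   # unproved premises
      inferred = set()
      agenda = deque(concl for prem, concl in kb if not prem)    # known facts
      while agenda:
          p = agenda.popleft()
          if p == q:
              return True
          if p in inferred:
              continue
          inferred.add(p)
          for i, (prem, concl) in enumerate(kb):
              if p in prem:
                  count[i] -= 1
                  if count[i] == 0:        # all premises proved: conclude
                      agenda.append(concl)
      return False

  kb = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
        ({"A", "P"}, "L"), ({"A", "B"}, "L"), (set(), "A"), (set(), "B")]
  print(pl_fc_entails(kb, "Q"))   # True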
7.5 Propositional Theorem Proving

A set of Horn clauses, in implication form and in clause form:

  P ⇒ Q          ¬P ˅ Q
  L ˄ M ⇒ P      ¬L ˅ ¬M ˅ P
  B ˄ L ⇒ M      ¬B ˅ ¬L ˅ M
  A ˄ P ⇒ L      ¬A ˅ ¬P ˅ L
  A ˄ B ⇒ L      ¬A ˅ ¬B ˅ L
  A              A
  B              B

And the corresponding AND-OR graph:


7.5 Propositional Theorem Proving with Resolution

KB = ¬P˅Q, ¬L˅¬M˅P, ¬B˅¬L˅M, ¬A˅¬P˅L, ¬A˅¬B˅L, A, B
Question: KB ╞ Q ?
KB ╞ Q if and only if KB, ¬Q ╞ false.
So resolve: ¬Q, ¬P˅Q, ¬L˅¬M˅P, ¬B˅¬L˅M, ¬A˅¬P˅L, ¬A˅¬B˅L, A, B

  ¬Q with ¬P˅Q gives ¬P
  ¬P with ¬L˅¬M˅P gives ¬L˅¬M
  ¬L˅¬M with ¬B˅¬L˅M gives ¬L˅¬B˅¬L = ¬L˅¬B (factoring: elimination of duplicate literals)
  ¬L˅¬B with ¬A˅¬B˅L gives ¬A˅¬B (again after factoring)
  ¬A˅¬B with A gives ¬B
  ¬B with B gives false (the empty clause)

Unit resolution (the li are literals):
  from  l1 ˅ l2 ˅ … ˅ li ˅ … ˅ ln  and  ¬li
  infer l1 ˅ l2 ˅ … ˅ li-1 ˅ li+1 ˅ … ˅ ln   (li deleted)
7.5 Propositional Theorem Proving with Resolution

• Full resolution rule (the li and mj are complementary literals; the commas
  between the clauses of a KB are an implicit “and”, the ˅ within a clause an “or”):
  from  l1 ˅ l2 ˅ … ˅ li ˅ … ˅ lk  and  m1 ˅ m2 ˅ … ˅ mj ˅ … ˅ mn
  infer l1 ˅ … ˅ li-1 ˅ li+1 ˅ … ˅ lk ˅ m1 ˅ … ˅ mj-1 ˅ mj+1 ˅ … ˅ mn
• Multiple copies of a literal in the resolvent are reduced to one (factoring).
• Examples:
  • ¬P˅Q and ¬L˅¬M˅P resolve to Q˅¬L˅¬M
  • ¬B˅¬L˅M and ¬B˅¬P˅L resolve to ¬B˅M˅¬B˅¬P, which factoring reduces to ¬B˅M˅¬P
  • ¬A˅B and ¬A˅C cannot be resolved (no complementary literals)

7.6 Effective Propositional Model Checking

• Davis–Putnam algorithm (DPLL) – an algorithm for checking satisfiability, based on the fact that the order in which symbols are assigned does not affect the result. Essentially, it is a depth-first search over partial models.
• Fundamental recursion:

  DPLL(clauses, symbols, model):
    if all clauses are true in model then return true
    if some clause is false in model then return false
    P ← next unassigned symbol in symbols
    return DPLL(clauses, symbols, model + {P/true}) or
           DPLL(clauses, symbols, model + {P/false})
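
A compact executable sketch of this recursion (my own illustration; clauses are sets of string literals with "~" marking negation, and the heuristics of the full DPLL are omitted):

  def dpll(clauses, symbols, model):
      """True iff some extension of model satisfies every clause."""
      def clause_value(clause):
          # True if some literal is true, False if all are false, None if undecided
          undecided = False
          for lit in clause:
              sym, want = lit.lstrip("~"), not lit.startswith("~")
              if sym not in model:
                  undecided = True
              elif model[sym] == want:
                  return True
          return None if undecided else False

      values = [clause_value(c) for c in clauses]
      if all(v is True for v in values):
          return True                       # early termination: all clauses satisfied
      if any(v is False for v in values):
          return False                      # early termination: some clause falsified
      p, rest = symbols[0], symbols[1:]
      return (dpll(clauses, rest, {**model, p: True}) or
              dpll(clauses, rest, {**model, p: False}))

  clauses = [{"~P", "Q"}, {"P"}]            # (P ⇒ Q) ˄ P
  print(dpll(clauses, ["P", "Q"], {}))      # True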


7.6 Effective Propositional Model Checking

• Heuristics in the DPLL algorithm:
  • Early termination – short-circuit the logical evaluation:
    a clause is true as soon as any literal in it is true;
    a sentence is false as soon as any clause in it is false.
  • Pure symbol heuristic – a symbol that appears with the same sign in all clauses of the sentence (all positive or all negative literals).
    • Making these literals true can never make a clause false, so a pure symbol can immediately be assigned the value that makes its literals true.
  • Unit clause heuristic – assign values so that unit clauses become true.
    • Unit clause – a clause in which all literals but one have already been assigned false.
    • Unit propagation – satisfying one unit clause often creates further unit clauses, causing a cascade of forced assignments.

7.6 Effective Propositional Model Checking

• Tricks to scale up to large SAT problems:
  • Component analysis (working on each connected component separately)
  • Variable and value ordering (e.g. choosing the variable that appears most often in the remaining clauses)
  • Intelligent backtracking (backing up all the way to the relevant conflict)
  • Random restarts (reduce the variance of the time to solution)
  • Clever indexing (with dynamic indexing structures)


7.6 Effective Propositional Model Checking

• WalkSAT – a local search algorithm based on the idea of a random walk:
  • The initial assignment is chosen randomly.
  • Repeat until satisfied or “exhausted” (a maximum number of flips is reached).
  • A min-conflicts heuristic (as with CSPs) is used to minimize the number of unsatisfied clauses.
  • With some probability, a random-walk step chooses the symbol to flip instead.
• If a satisfying assignment exists, WalkSAT will find it, eventually.
• If WalkSAT fails, it cannot prove that a sentence is unsatisfiable; it can only suggest unsatisfiability with high probability.
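
A minimal WalkSAT sketch (my own illustration; clauses are lists of (symbol, is_positive) literals, and the “exhausted” condition is a fixed flip budget):

  import random

  def walksat(clauses, p=0.5, max_flips=10_000):
      symbols = {s for clause in clauses for s, _ in clause}
      model = {s: random.choice([True, False]) for s in symbols}   # random start
      for _ in range(max_flips):
          unsatisfied = [c for c in clauses
                         if not any(model[s] == pos for s, pos in c)]
          if not unsatisfied:
              return model                                         # satisfied
          clause = random.choice(unsatisfied)
          if random.random() < p:
              sym = random.choice(clause)[0]                       # random-walk step
          else:
              # min-conflicts step: flip the symbol that satisfies the most clauses
              def satisfied_after_flip(s):
                  model[s] = not model[s]
                  n = sum(any(model[t] == pos for t, pos in c) for c in clauses)
                  model[s] = not model[s]
                  return n
              sym = max((s for s, _ in clause), key=satisfied_after_flip)
          model[sym] = not model[sym]
      return None    # failure: possibly unsatisfiable, possibly just unlucky

  clauses = [[("A", True), ("B", True)], [("A", False), ("B", True)]]   # (A˅B) ˄ (¬A˅B)
  print(walksat(clauses))    # e.g. {'A': False, 'B': True}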


7.6 Effective Propositional Model Checking

Hard Satisfiability Problems
• Let m be the number of clauses and n the number of symbols.
• The ratio m/n is indicative of the difficulty of the problem.
• For random 3-CNF sentences, the probability of satisfiability drops sharply around m/n = 4.3.
• Underconstrained – relatively small m/n, so the expected number of satisfying assignments is high.
• Overconstrained – relatively large m/n, so the expected number of satisfying assignments is low.
• Critical point – the value of m/n at which a problem is about equally likely to be satisfiable or unsatisfiable; these are the most difficult cases for satisfiability algorithms.
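
A small sketch (my own illustration) for generating random 3-CNF sentences at a chosen clause/symbol ratio, e.g. to probe the hard region near m/n = 4.3 with a checker such as the DPLL sketch above:

  import random

  def random_3cnf(n_symbols, ratio):
      """Random 3-CNF with m = round(ratio * n_symbols) clauses."""
      m = round(ratio * n_symbols)
      clauses = []
      for _ in range(m):
          syms = random.sample(range(n_symbols), 3)   # three distinct symbols
          clauses.append({("" if random.random() < 0.5 else "~") + f"X{s}"
                          for s in syms})
      return clauses

  # e.g.: clauses = random_3cnf(50, 4.3), then test with a SAT procedure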

7.7 Propositional Logic Agents

• Inference-based agent – an agent that maintains a knowledge base of propositions and uses the inference procedures described above for reasoning.
  • It is beyond the power of propositional logic to express efficiently statements that are true for whole sets of objects.
  • A proliferation of clauses occurs because a different set of clauses is needed for each step in time.


7.7 Propositional Logic Agents

• Circuit-based agent – a reflex agent in which percepts are inputs to a sequential circuit – a network of gates (logical connectives) and registers (each storing the truth value of a single proposition).
  • Dataflow – at each time step, the inputs are set for that time step and signals propagate through the circuit.
  • Delay line – implements internal state by feeding the output of a register back into the register as input at the next time step. The delay is drawn as a triangular gate (see the sketch below).
  • Circuits can only ascribe true/false values to a variable; there are no unknowns.
    • This requires each variable to be represented by two propositions: one saying whether the variable's value is known, and one holding the value itself when it is known.
  • Locality – the property of models in which the truth of each proposition can be determined from a constant number of other propositions.
  • Acyclicity – every cyclic path in the circuit must contain a time delay; a requirement for physical implementation.
  • Circuit agents have trouble representing interlocking dependencies and are therefore incomplete.
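
A tiny illustration (my own, not from the slides) of the delay-line idea: a register holds one proposition's truth value, and a value written at time t only becomes visible at time t+1.

  class Register:
      """One-proposition register with a one-step delay line."""
      def __init__(self, initial=False):
          self.value = initial     # truth value visible during the current time step
          self._next = initial     # value computed by the gates for the next step

      def write(self, new_value):  # set by the circuit during time step t
          self._next = new_value

      def tick(self):              # advance to time step t+1
          self.value = self._next

  # Unknowns need two propositions per variable: one for "is the value known",
  # one for the value itself when it is known.
  gold_known, gold_here = Register(False), Register(False)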
7.7 Propositional Logic Agents

• Tradeoffs:
  • Conciseness – circuit agents do not need separate copies of their knowledge for each point in time, whereas inference-based agents do.
  • Computational efficiency – in the worst case, inference is exponential in the number of symbols, whereas a circuit executes in time linear in its size.
  • Completeness – an inference-based agent is complete, whereas a complete circuit-based agent becomes exponentially large in the worst case.
  • Ease of construction – building small, acyclic, not-too-incomplete circuits is harder than writing a declarative description.
• Hybrid agent – tries to get the best of both worlds by implementing reflexes with circuits and performing inference when more difficult reasoning is needed.
