Artificial Intelligence
UNIT-3: Knowledge and Reasoning
✓ Logical agents
✓ Knowledge based agents
✓ The wumpus world
✓ Logic
✓ Propositional logic - a very simple logic
✓ Reasoning patterns in propositional logic
✓ Effective propositional inference
✓ Agents based on propositional logic
• An agent is anything that can be viewed as perceiving
its environment through sensors and acting upon that
environment through actuators.
• A human agent has eyes, ears, and other organs for
sensors and hands, legs, mouth, and so on for
actuators.
• A robotic agent might have cameras and infrared range
finders for sensors and various motors for actuators.
• A software agent receives keystrokes, file contents, and
network packets as sensory inputs and acts on the
environment by displaying on the screen, writing files,
and sending network packets.
• Each time the agent program is called, it does three things.
– it TELLs the knowledge base what it perceives.
– it ASKs the knowledge base what action it should perform.
– it TELLs the knowledge base which action was chosen, and the
agent executes the action.
• MAKE-PERCEPT-SENTENCE constructs a sentence asserting that the
agent perceived the given percept at the given time.
• MAKE-ACTION-QUERY constructs a sentence that asks what action
should be done at the current time.
• MAKE-ACTION-SENTENCE constructs a sentence asserting that the
chosen action was executed.
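As a concrete illustration, here is a minimal Python sketch of this TELL–ASK cycle. The KB class and the three sentence-construction helpers are simplified, hypothetical stand-ins for a real logical knowledge base, not an actual API:

class KB:
    def __init__(self):
        self.sentences = []
    def tell(self, sentence):            # add a sentence to the KB
        self.sentences.append(sentence)
    def ask(self, query):                # stub: a real KB would run inference
        return "Forward"

def make_percept_sentence(percept, t):
    return f"Percept({percept}, {t})"

def make_action_query(t):
    return f"BestAction?({t})"

def make_action_sentence(action, t):
    return f"Action({action}, {t})"

def kb_agent(kb, percept, t):
    """One call of the agent program: TELL percept, ASK action, TELL action."""
    kb.tell(make_percept_sentence(percept, t))
    action = kb.ask(make_action_query(t))
    kb.tell(make_action_sentence(action, t))
    return action

kb = KB()
print(kb_agent(kb, ["None", "Breeze", "None", "None", "None"], 0))  # Forward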
• Agents can be viewed at the knowledge level - i.e., what they know,
regardless of how implemented
• There are two approaches to build a knowledge-based agent
– Declarative approach
Starting with an empty knowledge base, the agent designer can TELL
sentences one by one until the agent knows how to operate in its
environment.
– Procedural approach
the procedural approach encodes desired behaviors directly as
program code.
• a successful agent often combines both declarative and procedural
elements in its design.
• Performance measure
• gold +1000, death -1000
• -1 per step, -10 for using the arrow
• Environment: 4 x 4 grid of rooms
• Squares adjacent to wumpus are smelly
• Squares adjacent to pit are breezy
• Glitter iff gold is in the same square
• Shooting kills wumpus if you are facing it
• Shooting uses up the only arrow
• Grabbing picks up gold if in same square
• Releasing drops the gold in same square
• Sensors:
Stench, Breeze, Glitter, Bump (hit a wall), Scream (the wumpus was killed)
The percepts will be given to the agent program in the form of a list of five
symbols: [Stench, Breeze, Glitter, Bump, Scream]
• Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot
Exploring the Wumpus World
Initial situation: the agent is in [1,1] and the percept is
[None, None, None, None, None].
From this the agent can infer that the neighboring squares are safe
(otherwise there would be a breeze or a stench).
After one move, the percept is
[None, Breeze, None, None, None].
After the third move, the percept is
[Stench, None, None, None, None].
Now the percept is [Stench, Breeze, Glitter, None, None].
In each case where the agent draws a conclusion from the available
information, that conclusion is guaranteed to be correct if the available
information is correct.
This is a fundamental property of logical reasoning.
Logic
✓ In AI, Logic is used for knowledge representation.
✓ Logic is a notation or method used to represent the knowledge.
✓ Logic defines the syntax and semantics of representation language.
✓ Syntax:
✓ Syntax defines the sentences in the language.
✓ We know that knowledge base is a collection of sentences. These
sentences are expressed according to the syntax of the
representation language.
✓ The syntax specifies which sentences are well formed.
✓ Example: In arithmetic
✓ x + y = 4 is a well formed sentence
✓ x 4 y + = is not a sentence.
✓ Semantics:
✓ Semantics defines the meaning of a sentence.
✓ The semantics of the language defines the truth of each sentence
with respect to each possible world.
✓ Example:
✓ x + y = 4 is true in a world where x=2 and y = 2
✓ x + y = 4 is false in a world where x=1 and y=1
✓ In standard logics every sentence must be either true or false in
each possible world.
✓ Model:
✓ We use the term “model” in place of “possible world”
✓ Possible worlds are real environments that the agent may or may
not be in.
✓ Models are mathematical abstractions, each of which simply fixes
the truth or falsehood of every relevant sentence.
✓ The phrase “m is a model of α” means that the sentence α is true in
model m.
✓ We use the notation M(α) to refer to the set of all models of α.
✓ Entailment:
✓ Entailment means that a sentence follows logically from another
sentence.
✓ In mathematical notation, entailment is represented as α |= β
✓ The formal definition of entailment is this:
α |= β if and only if, in every model in which α is true, β is also true.
✓ Using the notation just introduced, we can write
✓ α |= β if and only if M(α) ⊆ M(β) .
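As a tiny worked illustration of this definition, the sketch below encodes sentences over two symbols P and Q as Python predicates and checks M(α) ⊆ M(β) directly (the predicate encoding is an assumption for illustration):

from itertools import product

def models_of(sentence):
    """All (P, Q) truth assignments that make the sentence true."""
    return {pq for pq in product([False, True], repeat=2) if sentence(*pq)}

alpha = lambda p, q: p and q   # alpha = P ∧ Q
beta  = lambda p, q: p or q    # beta  = P ∨ Q
print(models_of(alpha) <= models_of(beta))   # True:  alpha |= beta
print(models_of(beta) <= models_of(alpha))   # False: beta does not entail alpha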
Logic
Consider the situation after detecting nothing in [1,1] and a breeze in
[2,1]. These percepts, combined with the agent's knowledge of the rules of
the wumpus world, constitute the KB.
The agent is interested in whether the adjacent squares [1,2], [2,2], and
[3,1] contain pits.
Each of the three squares might or might not contain a pit, so there are
2^3 = 8 possible models.
let us consider two possible conclusions:
α1 = “There is no pit in [1,2].”
α2 = “There is no pit in [2,2].”
the models of α1 and α2 are marked with dotted lines in Figures (a) and (b),
respectively.
By inspection, we see the following:
in every model in which KB is true, α1 is also true.
Hence, KB |= α1: there is no pit in [1,2].
We can also see that in some models in which KB is true, α2 is false.
Hence, KB ⊭ α2: the agent cannot conclude that there is no pit in [2,2].
Logical inference
• The process of deriving new conclusions from the available
information, from known facts, or from the KB
• entailment can be applied to carry out logical inference
• model checking is an example of an inference algorithm
• model checking enumerates all possible models and checks whether α
is true in every model in which KB is true
• if an inference algorithm i can derive α from KB, we write
KB ├i α
• which is pronounced “α is derived from KB by i” or “i derives α from KB”
✓ Properties of inference algorithms:
✓ Soundness
✓ an inference algorithm that derives only entailed sentences is sound
✓ Completeness
✓ An inference algorithm is complete if it derives any sentence that is
entailed
✓ Grounding
✓ the connection between logical reasoning processes and the real
environment in which the agent exists.
Propositional Logic
✓ Proposition means a statement that affirms or denies something and is either true
or false.
✓ Propositional logic is a simple statement logic. It is also called Boolean logic.
✓ Propositional logic is used to decide the truth value of a given sentence.
✓ Syntax
✓The syntax of propositional logic defines the allowable sentences
✓There are two types of sentences
✓ Atomic sentence
✓ Complex sentence
✓ Atomic Sentence
✓ Atomic sentences consist of single propositional symbol. Each symbol
stands for a proposition that can be true or false.
✓ We use the symbols that start with an uppercase letter and may contain
other letters or subscripts.
✓ For example: P, Q, R, W1,3 and North.
✓ We may use W1,3 to stand for the proposition that the wumpus is in [1,3].
✓ There are two proposition symbols with fixed meanings: True is the
always-true proposition and False is the always-false proposition.
Propositional Logic
Complex Sentence
✓ Complex sentences are constructed from simpler sentences, using parentheses
and logical connectives.
✓ There are five connectives in common use:
1. ¬ (not) : Represents the negation of a given sentence.
✓ Example
✓ W1,3 : There is a wumpus in square [1,3]
✓ ¬W1,3 : There is no wumpus in square [1,3]
2. ∧ (and) or conjunction :
✓A sentence whose main connective is ∧, such as W1,3 ∧ P3,1, is a
conjunction. Its parts are the conjuncts.
3. ∨ (or) or disjunction :
✓A sentence using ∨, such as W1,3 ∨ P3,1, is a disjunction; its parts are the
disjuncts
4. ⇒ (implies) or implication : Implication is represented as P ⇒ Q
✓ Its left side part is called premise or antecedent and its right side part is called
conclusion or consequent.
✓ Implications are also called as rules or if-then statements
Propositional Logic
5.⇔ (if and only if) or biconditional :
✓ It is represented as P ⇔ Q
✓ The sentence W1,3 ⇔ ¬W2,2 is a biconditional
✓ A formal grammar of sentences in propositional logic is shown
below:
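The grammar figure is not reproduced here; the standard BNF (following Russell & Norvig) is:

Sentence → AtomicSentence | ComplexSentence
AtomicSentence → True | False | P | Q | R | ...
ComplexSentence → ( Sentence )
                | ¬ Sentence
                | Sentence ∧ Sentence
                | Sentence ∨ Sentence
                | Sentence ⇒ Sentence
                | Sentence ⇔ Sentence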
Propositional Logic
Semantics:
✓ The semantics defines the rules for determining the truth of a sentence with
respect to a particular model.
✓ In propositional logic, a model simply assigns the truth value true or false for
every proposition symbol.
For example:
✓ if the sentences in the knowledge base make use of the proposition symbols
P1,2, P2,2, and P3,1, then one possible model is
m1 = {P1,2 =false, P2,2 =false, P3,1 =true}
✓ The semantics for propositional logic must specify how to compute the truth
value of any sentence, given a model.
✓ For atomic sentences:
✓ True is true in every model and False is false in every model
✓ The truth value of every other proposition symbol must be specified
directly in the model
Propositional Logic
✓ For complex sentences
✓ We have five rules, which hold for any sub-sentences P and Q in any
model m.
▪ ¬P is true iff P is false in m.
▪ P ∧ Q is true iff both P and Q are true in m.
▪ P ∨ Q is true iff either P or Q is true in m.
▪ P ⇒ Q is true unless P is true and Q is false in m.
▪ P ⇔ Q is true iff P and Q are both true or both false in m.
✓ Truth tables for the five logical connectives are given below:
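The truth-table figure is not reproduced here; the standard table is:

P      Q      |  ¬P    | P ∧ Q | P ∨ Q | P ⇒ Q | P ⇔ Q
false  false  |  true  | false | false | true  | true
false  true   |  true  | false | true  | true  | false
true   false  |  false | false | true  | false | false
true   true   |  false | true  | true  | true  | true

The five rules above can also be read as a short recursive evaluator. Below is a minimal Python sketch; the nested-tuple sentence encoding ("~", "&", "|", "=>", "<=>") is an assumption for illustration, not from the text:

def pl_true(s, model):
    """Evaluate sentence s in a model (dict: symbol -> bool)."""
    if isinstance(s, str):                   # atomic sentence: look it up
        return model[s]
    op, args = s[0], [pl_true(a, model) for a in s[1:]]
    if op == "~":   return not args[0]                # negation
    if op == "&":   return args[0] and args[1]        # conjunction
    if op == "|":   return args[0] or args[1]         # disjunction
    if op == "=>":  return (not args[0]) or args[1]   # implication
    if op == "<=>": return args[0] == args[1]         # biconditional

m1 = {"P12": False, "P22": False, "P31": True}        # the model m1 above
print(pl_true(("|", "P12", "P31"), m1))    # True
print(pl_true(("=>", "P31", "P22"), m1))   # False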
Propositional Logic – Simple KB
✓ Let us define, for each i, j:
✓ Let Pi,j be true if there is a pit in [i,j]
✓ Let Bi,j be true if there is a breeze in [i,j]
✓ The KB includes the following sentences, each one labelled for convenience
✓ R1 : ¬P1,1
✓ R2 : B1,1 ⇔ (P1,2 ∨ P2,1)
✓ R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
✓ R4 : ¬B1,1
✓ R5 : B2,1
A simple inference procedure
• the aim of logical inference is to decide whether KB |= α
• for ex. is P2,2 entailed?
• the first algorithm for inference will be a direct implementation of the definition of
entailment
• i.e., enumerate the models and check that α is true in every model in which KB is
true
• for propositional logic, models are assignments of true/false to every proposition
symbol
• In the wumpus world the relevant proposition symbols are B1,1, B2,1, P1,1, P1,2,
P2,1, P2,2, and P3,1. With seven symbols there are 2^7 = 128 possible models,
which are shown in the following table
• KB is true if R1 through R5 are true, which occurs in just 3 of these 128
rows
• in these 3 models ¬P1,2 is true, hence there is no pit in [1,2]
• A general algorithm for deciding entailment in propositional logic is as
follows
• TT-ENTAILS? performs a recursive enumeration of a finite space of
assignments to variables
• The algorithm is sound and complete
• Time complexity: O(2^n)
• Space complexity: O(n)
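The algorithm figure is not reproduced here. Below is a minimal Python sketch of the same model-checking idea, written as an iterative enumeration rather than AIMA's recursive TT-ENTAILS?; encoding the KB and queries as Python predicates over a model dictionary is an assumption for illustration:

from itertools import product

symbols = ["B11", "B21", "P11", "P12", "P21", "P22", "P31"]

def iff(a, b):                      # biconditional
    return a == b

def KB(m):                          # the wumpus KB R1..R5 from above
    return (not m["P11"]                                         # R1
            and iff(m["B11"], m["P12"] or m["P21"])              # R2
            and iff(m["B21"], m["P11"] or m["P22"] or m["P31"])  # R3
            and not m["B11"]                                     # R4
            and m["B21"])                                        # R5

def tt_entails(kb, alpha):
    """KB |= alpha iff alpha holds in every model where KB holds."""
    for values in product([False, True], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if kb(m) and not alpha(m):
            return False
    return True

print(tt_entails(KB, lambda m: not m["P12"]))  # True: no pit in [1,2]
print(tt_entails(KB, lambda m: not m["P22"]))  # False: cannot conclude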
Logical equivalence
• Two sentences are logically equivalent iff they are true in the same
models: α ≡ β iff α╞ β and β╞ α
Validity and satisfiability
A sentence is valid if it is true in all models,
e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B
Validity is connected to inference via the Deduction Theorem:
KB ╞ α if and only if (KB ⇒ α) is valid
A sentence is satisfiable if it is true in some model,
e.g., A ∨ B, C
A sentence is unsatisfiable if it is true in no models,
e.g., A ∧ ¬A
Satisfiability is connected to inference via the following:
KB ╞ α if and only if (KB ∧ ¬α) is unsatisfiable
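A minimal sketch of both connections, again enumerating models; the predicate encoding of sentences is an assumption for illustration:

from itertools import product

def models(symbols):
    for values in product([False, True], repeat=len(symbols)):
        yield dict(zip(symbols, values))

def is_valid(s, symbols):           # true in all models
    return all(s(m) for m in models(symbols))

def is_satisfiable(s, symbols):     # true in some model
    return any(s(m) for m in models(symbols))

syms = ["A", "B"]
# (A ∧ (A ⇒ B)) ⇒ B is valid:
print(is_valid(lambda m: not (m["A"] and (not m["A"] or m["B"])) or m["B"], syms))
# A ∧ ¬A is unsatisfiable:
print(is_satisfiable(lambda m: m["A"] and not m["A"], syms))    # False

In this form the two theorems read: KB ╞ α is equivalent to is_valid(KB ⇒ α) and to not is_satisfiable(KB ∧ ¬α).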
Reasoning patterns in propositional logic
✓ Standard patterns of inference can be applied to derive chains of conclusions
that lead to the desired goal.
✓ These patterns of inference are called “Inference rules”
✓ The best-known rules are
✓ 1. Modus Ponens
✓ 2. And-Elimination
✓ Modus Ponens:
✓ Whenever any sentences of the form α ⇒ β and α are given, the
sentence β can be inferred, written as follows:
α ⇒ β,    α
---------------
β
✓ Example: If ((wumpus ahead ∧ wumpus alive) ⇒ shoot) and (wumpus
ahead ∧ wumpus alive) are given, then shoot can be inferred.
✓ And-Elimination:
✓ From a conjunction, any of the conjuncts can be inferred:
α ∧ β              α ∧ β
---------   (or)   ---------
α                  β
✓ Example: From (wumpus ahead ∧ wumpus alive), wumpus alive can be inferred.
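A toy syntactic rendering of these two rules in Python, assuming sentences are encoded as nested tuples (("=>", p, q) for implication, ("and", p, q) for conjunction); this is an illustration, not AIMA's implementation:

def modus_ponens(kb):
    """Sentences inferable by Modus Ponens: from a => b and a, infer b."""
    return {s[2] for s in kb
            if isinstance(s, tuple) and s[0] == "=>" and s[1] in kb}

def and_elimination(kb):
    """Sentences inferable by And-Elimination: from a and b, infer each."""
    return {x for s in kb if isinstance(s, tuple) and s[0] == "and"
            for x in (s[1], s[2])}

kb = {("=>", ("and", "WumpusAhead", "WumpusAlive"), "Shoot"),
      ("and", "WumpusAhead", "WumpusAlive")}
kb |= and_elimination(kb)    # adds WumpusAhead, WumpusAlive
kb |= modus_ponens(kb)       # adds Shoot
print("Shoot" in kb)         # True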
Reasoning patterns in propositional logic
✓ By considering the possible truth values of α and β, one can show that Modus
Ponens and And-Elimination are sound.
✓ All of the logical equivalences can be used as inference rules.
✓ Let us see how these inference rules and equivalences can be used in the
wumpus world.
✓ Consider the Knowledge base includes the following sentences:
✓ R1 : ¬P1,1
✓ R2 : B1,1 ⇔ (P1,2 ∨ P2,1)
✓ R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
✓ R4 : ¬B1,1
✓ R5 : B2,1
✓ Suppose the agent wants to prove ¬P1,2, i.e., that there is no pit in [1,2]
✓ Step 1: Apply biconditional elimination to R2 to obtain
✓ R6 : (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
✓ Step 2: Apply And-Elimination to R6
✓ R7 : (P1,2 ∨ P2,1) ⇒ B1,1
✓ Step 3: Apply contraposition to R7
✓ R8 : ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)
Reasoning patterns in propositional logic
✓ Step 4: We can apply Modus Ponens with R8 and R4 to obtain R9
✓ R8 : ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)
✓ R4 : ¬B1,1
--------------
✓ R9 : ¬(P1,2 ∨ P2,1)
✓ Step 5: Apply De Morgan's rule to R9, giving
✓ R10 : ¬P1,2 ∧ ¬P2,1
✓ Step 6: Finally, apply And-Elimination to R10, giving the conclusion
✓ ¬P1,2, i.e., there is no pit in [1,2]
We found this proof by hand, but we can apply any of the search algorithms to find a
sequence of steps that constitutes a proof. We just need to define a proof problem as
follows:
• INITIAL STATE: the initial knowledge base.
• ACTIONS: the set of actions consists of all the inference rules applied to all the
sentences that match the top half of the inference rule.
• RESULT: the result of an action is to add the sentence in the bottom half of the
inference rule.
• GOAL: the goal is a state that contains the sentence we are trying to prove.
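Because adding sentences never invalidates earlier inferences (monotonicity, discussed below), one simple way to realize this search problem is to saturate the KB: repeatedly apply every rule to every matching sentence until the goal appears or nothing new can be derived. A minimal sketch, reusing the toy tuple encoding and rule functions from the previous sketch:

def prove(initial_kb, rules, goal):
    """ACTIONS = rule applications; RESULT adds sentences; GOAL = membership."""
    kb = set(initial_kb)
    while goal not in kb:
        new = set().union(*(rule(kb) for rule in rules)) - kb
        if not new:              # saturated without reaching the goal
            return False
        kb |= new
    return True

kb0 = {("=>", ("and", "WumpusAhead", "WumpusAlive"), "Shoot"),
       ("and", "WumpusAhead", "WumpusAlive")}
print(prove(kb0, [modus_ponens, and_elimination], "Shoot"))   # True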
Proof
✓ a sequence of applications of inference rules is called proof
✓ finding proofs is exactly like finding solutions to search problems
✓ searching for proofs is an alternative to enumerating models
✓ there are two types of searches:
✓ Forward chaining
✓ the search can go forward from the initial KB, applying
inference rules to derive the goal sentence
✓ Backward chaining
✓ the search can go backward from the goal sentence, trying to
find a chain of inference rules leading from the initial KB
• In many practical cases, finding a proof can be highly efficient simply
because it can ignore irrelevant propositions, no matter how many of
them there are
• Ex: the proof given above for ¬P1,2 ∧ ¬P2,1 doesn't mention the propositions
B2,1, P1,1, P2,2, or P3,1
• these propositions can be ignored because the goal proposition P1,2
appears only in sentence R2, and the other propositions in R2 appear only
in R4 and R2
• therefore R1, R3, and R5 have no bearing on the proof
• even if we add a million more sentences to the KB, it doesn't affect the
earlier inferences; this property is called monotonicity
Proof by Resolution
✓ The inference rules covered so far are sound.
✓ An inference algorithm may not be complete: if the available inference
rules are inadequate, then the goal is not reachable.
✓ E.g., if we removed the biconditional elimination rule, the proof in
the preceding section would not go through.
✓ Resolution is a single inference rule, that yields a complete
inference algorithm.
✓ We begin by using a simple version of the resolution rule in the
wumpus world.
✓ Consider the situation, the agent returns from [2,1] to [1,1] and
then goes to [1,2] where the agent perceives a stench, but no
breeze.
✓ We add the following sentences to the KB
• R11 : ¬B1,2 .
• R12 : B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3) .
• By the same process that led to R10 earlier, we can now derive the
absence of pits in [2,2] and [1,3] (remember that [1,1] is already known to
be pitless):
• R13 : ¬P2,2 .
• R14 : ¬P1,3 .
• We can also apply biconditional elimination to R3, followed by Modus
Ponens with R5, to obtain the fact that there is a pit in [1,1], [2,2], or [3,1]:
• R15 : P1,1 ∨ P2,2 ∨ P3,1 .
• Now comes the first application of the resolution rule: the literal ¬P2,2
in R13 resolves with the literal P2,2 in R15 to give the resolvent
• R16 : P1,1 ∨ P3,1 .
• In English: if there’s a pit in one of [1,1], [2,2], and [3,1] and it’s not in
[2,2], then it’s in [1,1] or [3,1].
• Similarly, the literal ¬P1,1 in R1 resolves with the literal P1,1 in R16 to
give
• R17 : P3,1 .
• In English: if there’s a pit in [1,1] or [3,1] and it’s not in [1,1], then it’s
in [3,1]. These last two inference steps are examples of the unit
resolution inference rule:
l1 ∨ … ∨ lk,        m
--------------------------------
l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk
• where each l is a literal and li and m are complementary literals
• A clause is a disjunction of literals; a unit clause is a disjunction of just
one literal
• (full) resolution takes two clauses and produces a new clause containing all
the literals of the two original clauses except the two complementary
literals:
l1 ∨ … ∨ lk,        m1 ∨ … ∨ mn
------------------------------------------------------------
l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj−1 ∨ mj+1 ∨ … ∨ mn
• where li and mj are complementary literals
• Ex: resolving (A ∨ B) with (A ∨ ¬B) gives (A ∨ A)
• The resulting clause should contain only one copy of each literal, so
(A ∨ A) reduces to just A
• the removal of multiple copies of literals is called factoring
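A minimal sketch of binary resolution with factoring, assuming clauses are Python frozensets of string literals ("P31", with "~P31" as its complement); representing clauses as sets makes factoring automatic:

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def pl_resolve(c1, c2):
    """All resolvents of clauses c1 and c2 (frozensets of literals)."""
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

r15 = frozenset({"P11", "P22", "P31"})    # R15: P1,1 ∨ P2,2 ∨ P3,1
r13 = frozenset({"~P22"})                 # R13: ¬P2,2
print(pl_resolve(r15, r13))               # [frozenset({'P11', 'P31'})] = R16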
Conjunctive normal form
• The resolution rule applies only to clauses, so it would seem relevant only
to knowledge bases and queries consisting of clauses; in fact, every
sentence of propositional logic is logically equivalent to a conjunction of
clauses.
• A sentence expressed as a conjunction of clauses is said to be in
Conjunctive normal form(CNF)
Conversion to CNF
B1,1 ⇔ (P1,2 ∨ P2,1)
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α):
(B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. Move ¬ inwards using de Morgan's rules and double-negation:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Apply the distributivity law (∨ over ∧) and flatten:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
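A minimal Python sketch of these four steps, using the nested-tuple sentence encoding from the earlier evaluator sketch (an assumption for illustration):

def eliminate_iff_implies(s):
    if isinstance(s, str):
        return s
    if s[0] == "<=>":                          # step 1
        a, b = map(eliminate_iff_implies, s[1:])
        return ("&", ("|", ("~", a), b), ("|", ("~", b), a))
    if s[0] == "=>":                           # step 2
        a, b = map(eliminate_iff_implies, s[1:])
        return ("|", ("~", a), b)
    return (s[0],) + tuple(eliminate_iff_implies(x) for x in s[1:])

def move_not_inwards(s):                       # step 3
    if isinstance(s, str):
        return s
    if s[0] == "~" and not isinstance(s[1], str):
        a = s[1]
        if a[0] == "~":                        # double negation
            return move_not_inwards(a[1])
        if a[0] in ("&", "|"):                 # De Morgan
            dual = "|" if a[0] == "&" else "&"
            return (dual, move_not_inwards(("~", a[1])),
                          move_not_inwards(("~", a[2])))
    if s[0] == "~":
        return s
    return (s[0],) + tuple(move_not_inwards(x) for x in s[1:])

def distribute(s):                             # step 4: ∨ over ∧
    if isinstance(s, str) or s[0] == "~":
        return s
    a, b = distribute(s[1]), distribute(s[2])
    if s[0] == "|":
        if not isinstance(a, str) and a[0] == "&":
            return ("&", distribute(("|", a[1], b)), distribute(("|", a[2], b)))
        if not isinstance(b, str) and b[0] == "&":
            return ("&", distribute(("|", a, b[1])), distribute(("|", a, b[2])))
    return (s[0], a, b)

def to_cnf(s):
    return distribute(move_not_inwards(eliminate_iff_implies(s)))

print(to_cnf(("<=>", "B11", ("|", "P12", "P21"))))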
Resolution Algorithm
• inference procedures based on resolution work by using the principle of
proof by contradiction
• i.e., to show that KB |= α, we show that (KB ∧ ¬α) is unsatisfiable; this
is known as proof by contradiction
• The function PL-RESOLVE returns the set of all possible clauses obtained by resolving its
two inputs.
• Partial application of PL-RESOLUTION to a simple inference in the
Wumpus world. ¬P1,2 is shown to follow from the first four
clauses in the top row.
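A minimal Python sketch of this saturation procedure over frozenset clauses (same encoding as the resolution sketch above, with negate and pl_resolve repeated for completeness); it succeeds when the empty clause is derived:

from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def pl_resolve(c1, c2):
    return [(c1 - {l}) | (c2 - {negate(l)}) for l in c1 if negate(l) in c2]

def pl_resolution(clauses):
    """True iff the clause set is unsatisfiable (refutation succeeds)."""
    clauses = set(clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for r in pl_resolve(ci, cj):
                if not r:                # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:               # no new clauses: satisfiable
            return False
        clauses |= new

# KB = CNF of R2 plus R4; query alpha = ¬P12, so we add its negation P12.
kb = [frozenset({"~B11", "P12", "P21"}), frozenset({"~P12", "B11"}),
      frozenset({"~P21", "B11"}), frozenset({"~B11"})]
print(pl_resolution(kb + [frozenset({"P12"})]))   # True: KB |= ¬P12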
• real-world knowledge bases often contain only clauses of a restricted
form
• One such restricted form is the definite clause, which is a disjunction of
literals, of which exactly one is positive.
• For example, the clause (¬L1,1 ∨¬Breeze ∨B1,1) is a definite
clause, whereas (¬B1,1 ∨ P1,2 ∨ P2,1) is not.
• Another is the Horn clause, a disjunction of literals of which at most
one is positive.
• A Horn clause with no positive literals is called a goal clause
• Knowledge bases containing only definite clauses are interesting for
three reasons:
1. Every definite clause can be written as an implication whose premise is
a conjunction of positive literals and whose conclusion is a single positive
literal.
• For example, the definite clause (¬L1,1 ∨ ¬Breeze ∨ B1,1) can be
written as the implication (L1,1 ∧ Breeze) ⇒ B1,1.
• In the implication form, the sentence is easier to understand: it says that
if the agent is in [1,1] and there is a breeze, then [1,1] is breezy.
• In Horn form, the premise is called the body and the conclusion is
called the head.
• A sentence consisting of a single positive literal, such as L1,1, is called
a fact.
2. Inference with Horn clauses can be done through the forward-chaining
and backward-chaining algorithms.
• Both of these algorithms are natural, in that the inference steps are
obvious and easy for humans to follow.
3. Deciding entailment with Horn clauses can be done in time that is linear
in the size of the knowledge base
Forward chaining
• The forward-chaining algorithm PL-FC-ENTAILS?(KB, q) determines
whether a single proposition symbol q (the query) is entailed by a KB of
Horn clauses
• the inference begins from the known facts(i.e., positive literals) in the KB.
• if all the premises of an implication are known, then its conclusion is
added to the set of known facts.
• Ex. if L1,1 and Breeze are known and (L1,1 ∧ Breeze) ⇒ B1,1 is in the KB,
then the conclusion B1,1 can be added.
• This process continues until the query q is added or until no further
inference can be made.
• To understand the forward-chaining algorithm, consider a KB of Horn
clauses with A and B as known facts.
• the knowledge base contains
• P ⇒ Q ; L ∧M ⇒ P; B ∧ L ⇒ M; A ∧ P ⇒ L; A ∧ B ⇒ L; A; B
• The corresponding AND–OR graph.
• In an AND–OR graph, multiple links joined
by an arc indicate a conjunction: every link
must be proved.
• multiple links without an arc indicate a
disjunction: any link can be proved.
ex: L ∧ M ⇒ P (links joined by an arc); P ⇒ Q (single link)
• it is easy to see how forward chaining works in the graph
• the known leaves (A and B) are set, and inference propagates up the graph
as far as possible
• whenever a conjunction appears, the propagation waits until all the
conjuncts are known before proceeding
• the variable count keeps track of how many premises of each
implication are as yet unknown
• whenever a new symbol p is processed, the count is reduced by one for
each implication in whose premise p appears
• if the count reaches 0, all the premises of the implication are known, so its
conclusion can be added to the agenda
• the agenda keeps track of symbols known to be true but not yet processed
• Forward chaining is an example of the general concept of data-driven
reasoning
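A minimal Python sketch of forward chaining with the count/agenda bookkeeping described above, assuming Horn clauses are encoded as (premises, conclusion) pairs; the encoding is an illustration, not AIMA's data structure:

def pl_fc_entails(clauses, facts, q):
    count = {i: len(p) for i, (p, _) in enumerate(clauses)}  # unknown premises
    inferred = set()
    agenda = list(facts)             # known true but not yet processed
    while agenda:
        p = agenda.pop()
        if p == q:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (premises, conclusion) in enumerate(clauses):
                if p in premises:
                    count[i] -= 1
                    if count[i] == 0:            # all premises now known
                        agenda.append(conclusion)
    return False

clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(pl_fc_entails(clauses, ["A", "B"], "Q"))   # True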
Backward-chaining algorithm
• The backward-chaining algorithm works backward from the query.
• If the query q is known to be true, then no work is needed.
• Otherwise, the algorithm finds those implications in the knowledge base
whose conclusion is q.
• If all the premises of one of those implications can be proved true (by
backward chaining), then q is true.
• When applied to the query Q in Figure, it works back down the graph
until it reaches a set of known facts, A and B, that forms the basis for a
proof.
• As with forward chaining, an efficient implementation runs in linear time.
• Backward chaining is a form of goal-directed reasoning.
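A minimal Python sketch of backward chaining over the same (premises, conclusion) encoding used in the forward-chaining sketch; the visited set guards against looping on goals already under consideration:

def pl_bc_entails(clauses, facts, q, visited=frozenset()):
    if q in facts:
        return True                  # a known fact: no work needed
    if q in visited:
        return False                 # already trying to prove q: avoid loops
    for premises, conclusion in clauses:
        if conclusion == q and all(
                pl_bc_entails(clauses, facts, p, visited | {q})
                for p in premises):
            return True              # every premise proved, so q is proved
    return False

clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(pl_bc_entails(clauses, {"A", "B"}, "Q"))   # True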
DPLL algorithm
• proposed by Davis, Putnam, Logemann, and Loveland: a backtracking
algorithm that decides whether a CNF sentence is satisfiable
• component analysis: components are sets of clauses that share no
unassigned variables, and each component can be solved independently
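A minimal Python sketch of the core DPLL recursion over frozenset clauses (unit-clause preference only; the pure-symbol and component heuristics are omitted); the literal encoding follows the earlier resolution sketches:

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def dpll(clauses):
    """True iff the set of frozenset clauses is satisfiable."""
    if not clauses:
        return True                  # every clause satisfied
    if frozenset() in clauses:
        return False                 # empty clause: contradiction
    units = [c for c in clauses if len(c) == 1]      # prefer unit clauses
    lit = next(iter(units[0])) if units else next(iter(next(iter(clauses))))
    def assign(l):                   # simplify under the assumption l is true
        return {c - {negate(l)} for c in clauses if l not in c}
    return dpll(assign(lit)) or dpll(assign(negate(lit)))

print(dpll({frozenset({"P", "Q"}), frozenset({"~P"})}))   # True
print(dpll({frozenset({"P"}), frozenset({"~P"})}))        # False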
Local Search algorithm
Agents based on propositional logic