Chapter 4
Knowledge and Reasoning
Contents
• Knowledge based Agents
• WUMPUS World Example
• Brief Overview of propositional logic
• First Order Logic: Syntax and Semantics
• Inference in FOL
• Forward chaining, backward Chaining
• Knowledge Engineering in First-Order Logic
• Unification
• Resolution
• Uncertain Knowledge and Reasoning: Uncertainty
• Representing knowledge in an uncertain domain
• The semantics of belief network
• Simple Inference in belief network
Knowledge based Agents
1. Understanding Knowledge Representation
Knowledge and Representation are distinct yet interconnected concepts in intelligent systems.
• Knowledge: It describes the world and determines the system’s competence based on what it knows.
• Representation: It refers to how knowledge is encoded, which impacts how effectively a system can utilize knowledge to
perform tasks.
2. Types of Knowledge
There are two primary types of knowledge:
1. Procedural Knowledge ("Knowing How")
   - Knowledge of how to perform a task.
   - Example: knowing how to drive a car.
   - Represented as rules, sequences, or step-by-step instructions in AI systems.
2. Declarative Knowledge ("Knowing That")
   - Factual knowledge about things.
   - Example: knowing that the speed limit on a motorway is 70 mph.
   - Represented as facts and relationships in AI systems.
WUMPUS World Example
• The following components can help the agent navigate the cave:
1. The rooms adjacent to the Wumpus's room are smelly, so they have a stench.
2. The rooms adjacent to a pit have a breeze, so if the agent moves near a pit, it will perceive the breeze.
3. There will be glitter in a room if and only if the room has gold.
4. The Wumpus can be killed by the agent if the agent is facing it; when killed, the Wumpus emits a horrible scream which can be heard anywhere in the cave.
PEAS description of Wumpus world:
Performance measure:
• +1000 reward points if the agent comes out of the cave with the gold.
• -1000 points penalty for being eaten by the Wumpus or falling into a pit.
• -1 for each action, and -10 for using an arrow.
• The game ends if the agent dies or comes out of the cave.
Environment:
• A 4×4 grid of rooms.
• The agent initially starts in square [1, 1], facing toward the right.
• The locations of the Wumpus and the gold are chosen randomly, excluding the first square [1, 1].
• Each square of the cave except the first can be a pit with probability 0.2. (There are 15 squares other than [1, 1] and about 3 pits; 3/15 = 1/5 = 0.2, i.e. roughly one pit per 5 squares, with the other 4 safe.)
Actuators:
• Turn left
• Turn right
• Move forward
• Grab
• Release
• Shoot
Sensors:
• The agent will perceive a stench if it is in a room directly (not diagonally) adjacent to the Wumpus.
• The agent will perceive a breeze if it is in a room directly adjacent to a pit.
• The agent will perceive glitter in the room where the gold is present.
• The agent will perceive a bump if it walks into a wall.
• When the Wumpus is shot, it emits a horrible scream which can be perceived anywhere in the cave.
• These percepts can be represented as a five-element list, with a different indicator for each sensor.
• Example: if the agent perceives stench and breeze, but no glitter, no bump, and no scream, this can be represented as:
[Stench, Breeze, None, None, None].
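The five-element percept list above can be sketched in code; the function name and encoding below are illustrative conventions, not part of any standard Wumpus implementation.

```python
# Sketch: build the five-element percept list, using None for an absent percept.
def make_percept(stench=False, breeze=False, glitter=False, bump=False, scream=False):
    labels = ["Stench", "Breeze", "Glitter", "Bump", "Scream"]
    flags = [stench, breeze, glitter, bump, scream]
    return [label if flag else None for label, flag in zip(labels, flags)]

print(make_percept(stench=True, breeze=True))
# -> ['Stench', 'Breeze', None, None, None]
```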
Exploring Wumpus World
• Now we will explore the Wumpus world and will determine how the agent will find its goal by applying logical reasoning.
• Agent's First step:
• Initially, the agent is in the first room, square [1,1], which we already know is safe. To show on diagram (a) that the room is safe, we mark it with the symbol OK. Symbol A represents the agent, B a breeze, G glitter or gold, V a visited room, P a pit, and W the Wumpus.
• In room [1,1] the agent perceives neither a breeze nor a stench, which means the adjacent squares are also OK.
Agent's second Step:
• Now the agent needs to move forward, so it will go to either [1, 2] or [2, 1]. Suppose the agent moves to room [2, 1]. There it perceives a breeze, which means a pit is adjacent to this room: the pit can be in [3, 1] or [2, 2], so we mark those squares with P? to indicate possible pit rooms.
• The agent now stops to think and will not make any risky move; it goes back to room [1, 1]. Rooms [1, 1] and [2, 1] have been visited by the agent, so we mark them with the symbol V.
• Given the truth values of all of the constituent symbols in a sentence, that sentence can be
"evaluated" to determine its truth value (True or False). This is called an interpretation of the
sentence.
• A model is an interpretation (i.e., an assignment of truth values to symbols) of a set of sentences
such that each sentence is True. A model is just a formal mathematical structure that "stands in" for
the world.
• A valid sentence (also called a tautology) is a sentence that is True under all interpretations. Hence,
no matter what the world is actually like or what the semantics is, the sentence is True. For example
"It's raining or it's not raining."
• An inconsistent sentence (also called unsatisfiable or a contradiction) is a sentence that is False
under all interpretations. Hence the world is never like what it describes. For example, "It's raining
and it's not raining."
• Propositional logic (PL) is the simplest form of logic where all the statements are made by propositions. A
proposition is a declarative statement which is either true or false. It is a technique of knowledge
representation in logical and mathematical form
• Logical connectives are used to connect two simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are mainly five connectives, given as follows:
1. Negation: A sentence such as ¬ P is called negation of P. A literal can be either Positive literal or negative
literal.
2. Conjunction: A sentence which has ∧ connective such as, P ∧ Q is called a conjunction.
Example: Rohan is intelligent and hardworking. It can be written as,
P= Rohan is intelligent,
Q= Rohan is hardworking. P∧ Q.
3. Disjunction: A sentence which has ∨ connective, such as P ∨ Q. is called disjunction, where P and Q are the
propositions.
Example: "Ritika is a doctor or Engineer",
Here P= Ritika is Doctor. Q= Ritika is Engineer, so we can write it as P ∨ Q.
4. Implication: A sentence such as P → Q, is called an implication. Implications are also known as if-then rules. It
can be represented as
If it is raining, then the street is wet.
Let P= It is raining, and Q= Street is wet, so it is represented as P → Q
5. Biconditional: A sentence such as P ⇔ Q is a biconditional sentence.
Example: "I am breathing if and only if I am alive."
P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q.
Logical Equivalence
• Logical equivalence is one of the features of propositional logic. Two propositions are said to be logically equivalent if and only if their truth-table columns are identical.
• For two propositions A and B, logical equivalence is written A ⇔ B. For example, the truth-table columns for ¬A ∨ B and A → B are identical, hence the two sentences are equivalent.
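The equivalence of A → B and ¬A ∨ B can be checked by brute force over the truth table; this is a minimal sketch, with the implication column written out explicitly.

```python
from itertools import product

# The implication column written out row by row, as in the truth table,
# compared against the column for (not A) or B.
IMPLIES = {(False, False): True, (False, True): True,
           (True, False): False, (True, True): True}

rows = [(a, b, IMPLIES[(a, b)], (not a) or b)
        for a, b in product([False, True], repeat=2)]

for a, b, imp_col, disj_col in rows:
    print(a, b, imp_col, disj_col)

# The last two columns agree on every row, so A -> B is equivalent to ¬A ∨ B.
assert all(c1 == c2 for _, _, c1, c2 in rows)
```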
Properties of Operators
• Commutativity:
• P∧ Q= Q ∧ P, or
• P ∨ Q = Q ∨ P.
• Associativity:
• (P ∧ Q) ∧ R= P ∧ (Q ∧ R),
• (P ∨ Q) ∨ R= P ∨ (Q ∨ R)
• Identity element:
• P ∧ True = P,
• P ∨ False = P (and by the domination law, P ∨ True = True).
• Distributive:
• P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).
• P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
• De Morgan's Laws:
• ¬ (P ∧ Q) = (¬P) ∨ (¬Q)
• ¬ (P ∨ Q) = (¬ P) ∧ (¬Q).
• Double-negation elimination:
• ¬ (¬P) = P.
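All of the operator properties above can be verified by enumerating every truth assignment; a short sketch:

```python
from itertools import product

# Brute-force verification of the operator properties over all assignments.
for p, q, r in product([False, True], repeat=3):
    assert (p and q) == (q and p)                        # commutativity
    assert (p and (q or r)) == ((p and q) or (p and r))  # distributivity
    assert (p or (q and r)) == ((p or q) and (p or r))   # distributivity
    assert (not (p and q)) == ((not p) or (not q))       # De Morgan
    assert (not (p or q)) == ((not p) and (not q))       # De Morgan
    assert (not (not p)) == p                            # double negation
print("all properties hold")
```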
Examples:
Q.1) Consider the following set of facts:
I. Rani is hungry.
II. If Rani is hungry, she barks.
III. If Rani is barking, then Raja is angry.
Convert these into propositional logic statements.
Solution:
Step 1: We can use the following propositional symbols:
P: Rani is hungry.
Q: Rani is barking.
R: Raja is angry.
Step 2: The propositional logic statements are:
I. P
II. P => Q
III. Q => R
Identify which of the following statements are propositions. The following statements are not propositions:
• Close the door. (Command)
• Do you speak French? (Question)
• What a beautiful picture! (Exclamation)
• I always tell lies. (Self-contradictory: cannot be consistently assigned true or false)
• P(x): x + 3 = 5 (Predicate: its truth value depends on x)
By comparing truth-table columns, we can verify that P → Q is equivalent to its contrapositive ¬Q → ¬P, and likewise Q → P is equivalent to ¬P → ¬Q.
1. Modus Ponens:
The Modus Ponens rule is one of the most important rules of inference. It states that if P and P → Q are true, then we can infer that Q is true. It can be represented as:
Example:
Statement-1: "If I am sleepy then I go to bed" ==> P→ Q
Statement-2: "I am sleepy" ==> P
Conclusion: "I go to bed." ==> Q.
Hence, we can say that, if P→ Q is true and P is true then Q will be true.
3. Hypothetical Syllogism:
The Hypothetical Syllogism rule states that if P → Q is true and Q → R is true, then P → R is true. It can be represented as the following notation:
Example:
Statement-1: If you have my home key then you can unlock my home. P→Q
Statement-2: If you can unlock my home then you can take my
money. Q→R
Conclusion: If you have my home key then you can take my money. P→R
Proof by truth table:
4. Disjunctive Syllogism:
• The Disjunctive Syllogism rule states that if P ∨ Q is true and ¬P is true, then Q is true. It can be represented as:
• Example:
• Statement-1: Today is Sunday or Monday. ==>P∨Q
Statement-2: Today is not Sunday. ==> ¬P
Conclusion: Today is Monday. ==> Q
• Proof by truth-table:
5. Addition:
The Addition rule is one of the common inference rules. It states that if P is true, then P ∨ Q is true.
Example:
Statement: I have a vanilla ice-cream. ==> P
Statement-2: I have Chocolate ice-cream. => Q
Conclusion: I have vanilla or chocolate ice-cream. ==> (P∨Q)
Proof by Truth-Table:
6. Simplification:
The Simplification rule states that if P ∧ Q is true, then P is true (and likewise Q is true). It can be represented as:
Proof by Truth-Table:
7. Resolution:
The Resolution rule states that if P ∨ Q and ¬P ∨ R are true, then Q ∨ R is also true. It can be represented as:
Proof by Truth-Table:
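The soundness of the resolution rule can be checked mechanically: on every truth assignment where both premises P ∨ Q and ¬P ∨ R hold, the resolvent Q ∨ R must hold too. A minimal sketch:

```python
from itertools import product

# Whenever both premises (P ∨ Q) and (¬P ∨ R) are true,
# the resolvent (Q ∨ R) is also true — resolution is sound.
for p, q, r in product([False, True], repeat=3):
    premises = (p or q) and ((not p) or r)
    if premises:
        assert q or r
print("resolution rule verified")
```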
Knowledge Engineering in First-Order Logic
What is knowledge-engineering?
• The process of constructing a knowledge base in first-order logic is called knowledge engineering. In knowledge engineering, someone who investigates a particular domain, learns the important concepts of that domain, and generates a formal representation of its objects is known as a knowledge engineer.
The knowledge-engineering process:
1. Identify the task
2. Assemble the relevant knowledge
3. Decide on vocabulary
4. Encode general knowledge about the domain
5. Encode a description of the problem instance
6. Pose queries to the inference procedure and get answers
7. Debug the knowledge base
First Order Logic: Syntax and Semantics
• User defines these primitives:
Constant symbols (i.e., the "individuals" in the world)
E.g., Mary, 3
Function symbols (mapping individuals to individuals)
E.g., father-of(Mary) = John, color-of(Sky) = Blue
Predicate symbols (mapping from individuals to truth values)
E.g., greater(5,3), green(Grass), color(Grass, Green)
• Existential quantification: (∃x)P(x) means that P holds for some value of x in the domain associated with that variable.
E.g., (∃x) mammal(x) ∧ lays-eggs(x)
• Existential quantifiers are usually used with "and" (∧) to specify a list of properties or facts about an individual.
E.g., (∃x) IT-student(x) ∧ smart(x) means "there is an IT student who is smart."
A common mistake is to represent this English sentence as the FOL sentence:
(∃x) IT-student(x) => smart(x). But consider what happens when there is a person who is NOT an IT student: the implication is then vacuously true, so the sentence holds even if no smart IT student exists.
Examples
1. All birds fly.
- Here the predicate is "fly(bird)."
- Since all birds fly, it is represented as follows:
∀x bird(x) → fly(x).
• "There are exactly two purple mushrooms":
(∃x)(∃y): mushroom(x) ∧ purple(x) ∧ mushroom(y) ∧ purple(y) ∧ ¬(x=y) ∧ (∀z)((mushroom(z) ∧ purple(z)) => ((x=z) ∨ (y=z)))
Reading it piece by piece:
∃x ∃y: there exist two mushrooms x and y;
mushroom(x) ∧ purple(x) ∧ mushroom(y) ∧ purple(y): both x and y are purple mushrooms;
¬(x=y): they are distinct;
∀z (mushroom(z) ∧ purple(z) → (z = x ∨ z = y)): any purple mushroom z must be either x or y (no third one).
• Unification takes two literals as input and makes them identical using substitution.
• E.g., Ψ1 = King(x), Ψ2 = King(John): the substitution θ = {John/x} makes them identical.
Implementation of the Algorithm
Step 1: Initialize the substitution set to be empty.
Step 2: If one expression is a variable vi and the other is a term ti which does not contain vi, then:
  a. Apply ti/vi to the existing substitutions.
  b. Add ti/vi to the substitution set.
Step 3: If both expressions are functions, then the function names must be the same and the number of arguments must be the same in both expressions.
Examples
1. Find the MGU of {p(f(a), g(Y)) and p(X, X)}
Sol: S0 => Here, Ψ1 = p(f(a), g(Y)) and Ψ2 = p(X, X)
SUBST θ = {f(a)/X}
S1 => Ψ1 = p(f(a), g(Y)) and Ψ2 = p(f(a), f(a))
Now g(Y) must unify with f(a), but the function symbols g and f differ, so unification fails.
2. Find the MGU of {p(b, X, f(g(Z))) and p(Z, f(Y), f(Y))}
SUBST θ = {b/Z} gives p(b, X, f(g(b))) and p(b, f(Y), f(Y))
SUBST θ = {f(Y)/X} gives p(b, f(Y), f(g(b))) and p(b, f(Y), f(Y))
SUBST θ = {g(b)/Y} gives p(b, f(g(b)), f(g(b))) on both sides.
Unifier: {b/Z, f(g(b))/X, g(b)/Y}.
4. Find the MGU of {p(X, X) and p(Z, f(Z))}
SUBST θ = {Z/X} gives p(Z, Z) and p(Z, f(Z))
Now Z must unify with f(Z), but Z occurs inside f(Z), so the occurs check fails and there is no unifier.
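The algorithm above can be sketched as a small unifier with an occurs check. The term encoding (tuples for compound terms, uppercase strings as variables, lowercase strings as constants) is our own convention for illustration, not from the text.

```python
# Minimal unification sketch with occurs check.
# ('p', ('f', 'a'), ('g', 'Y')) stands for p(f(a), g(Y)); 'Y' is a variable.

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def substitute(t, s):
    """Apply substitution s to term t, following variable chains."""
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(a, s) for a in t[1:])
    return t

def occurs(v, t, s):
    """Occurs check: does variable v appear inside term t under s?"""
    t = substitute(t, s)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, s) for a in t[1:])
    return False

def unify(x, y, s=None):
    """Return a substitution dict unifying x and y, or None on failure."""
    if s is None:
        s = {}
    x, y = substitute(x, s), substitute(y, s)
    if x == y:
        return s
    if is_var(x):
        if occurs(x, y, s):
            return None                      # occurs check fails, e.g. X vs f(X)
        s = dict(s); s[x] = y
        return s
    if is_var(y):
        return unify(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) \
            and x[0] == y[0] and len(x) == len(y):
        for a, b in zip(x[1:], y[1:]):       # same functor and arity: unify args
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None                              # clash: different functors/constants

# Example 1: p(f(a), g(Y)) vs p(X, X) — fails (g vs f clash).
print(unify(('p', ('f', 'a'), ('g', 'Y')), ('p', 'X', 'X')))   # None
# Example 4: p(X, X) vs p(Z, f(Z)) — fails by the occurs check.
print(unify(('p', 'X', 'X'), ('p', 'Z', ('f', 'Z'))))          # None
# Example 2: p(b, X, f(g(Z))) vs p(Z, f(Y), f(Y)) — succeeds.
print(unify(('p', 'b', 'X', ('f', ('g', 'Z'))),
            ('p', 'Z', ('f', 'Y'), ('f', 'Y'))))
```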
CNF example: (P ∨ Q ∨ R) ∧ (P ∨ Q) ∧ (P ∨ R) ∧ P
DNF example: (P ∧ Q ∧ R) ∨ (P ∧ Q) ∨ (P ∧ R) ∨ P
Solution: convert ((P -> Q) -> R) to CNF.
Step 1: Eliminate the inner implication: ((¬P ∨ Q) -> R)
Step 2: Eliminate the outer implication: ¬(¬P ∨ Q) ∨ R
Step 3: Apply De Morgan's law: (P ∧ ¬Q) ∨ R
Step 4: Distribute ∨ over ∧: (P ∨ R) ∧ (¬Q ∨ R), which is in CNF.
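The worked conversion of ((P -> Q) -> R) can be sanity-checked by confirming that the original formula and its CNF agree on every truth assignment:

```python
from itertools import product

def imp(a, b):
    # Material implication: false only when a is true and b is false.
    return (not a) or b

# ((P -> Q) -> R) must equal its CNF (P ∨ R) ∧ (¬Q ∨ R) on every row.
for p, q, r in product([False, True], repeat=3):
    original = imp(imp(p, q), r)
    cnf = (p or r) and ((not q) or r)
    assert original == cnf
print("CNF conversion verified")
```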
Steps for Resolution:
1.Conversion of facts into first-order logic.
2.Convert FOL statements into CNF
3.Negate the statement which needs to prove (proof by contradiction)
4.Draw resolution graph (unification).
• 1) Eliminate implication '=>'
• a => b ≡ ¬a ∨ b
• (a ∧ b) => c ≡ ¬(a ∧ b) ∨ c
• 2) Eliminate '∧' (split conjunctions into separate clauses)
• a ∧ b gives the clauses: a, b
• a ∧ b ∧ ¬c gives the clauses: a, b, ¬c
• a ∧ (b ∨ c) gives the clauses: a, (b ∨ c)
• a ∨ (b ∧ c) distributes to the clauses: (a ∨ b), (a ∨ c)
3) Eliminate '∃' (Skolemization)
• We can eliminate ∃ by substituting for the variable a reference to a function that produces the desired value.
• E.g., "There is a teacher":
• ∃x: Teacher(x). Eliminating '∃' gives
• Teacher(S1), where S1 is a Skolem constant that somehow produces a value satisfying the predicate Teacher.
4) Eliminate '∀'
• To eliminate '∀', convert the formula into prenex normal form, in which all the universal quantifiers are at the beginning of the formula, and then drop them.
• E.g., "All students are intelligent":
• ∀x: Student(x) -> Intelligent(x)
• After eliminating ∀x we get:
• Student(x) -> Intelligent(x).
Q.1) Consider the following set of facts:
I. Rani is hungry.
II. If Rani is hungry, she barks.
III. If Rani is barking, then Raja is angry.
Convert these into propositional logic statements,
and prove that "Raja is angry" by resolution.
Sol:
Step 1: We can use the following propositional symbols:
P: Rani is hungry.
Q: Rani is barking.
R: Raja is angry.
Step 2: The propositional logic statements are:
I. P
II. P => Q
III. Q => R
Step 3: Convert to clauses and negate the goal: {P}, {¬P ∨ Q}, {¬Q ∨ R}, {¬R}. Resolving P with ¬P ∨ Q gives Q; resolving Q with ¬Q ∨ R gives R; resolving R with ¬R gives the empty clause. Thus we derive the empty clause and can conclude that "Raja is angry."
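This refutation can be sketched as a small propositional resolution procedure; the clause encoding (frozensets of (symbol, polarity) literals) is an illustrative convention:

```python
from itertools import combinations

# A literal is ('P', True) for P or ('P', False) for ¬P; a clause is a frozenset.
def resolve(c1, c2):
    """Return all resolvents of clauses c1 and c2."""
    resolvents = []
    for sym, val in c1:
        if (sym, not val) in c2:
            resolvents.append((c1 - {(sym, val)}) | (c2 - {(sym, not val)}))
    return resolvents

def refutes(clauses):
    """Saturate the clause set; True if the empty clause is derivable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True      # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False             # nothing new: no contradiction
        clauses |= new

# KB clauses: P, ¬P ∨ Q, ¬Q ∨ R, plus the negated goal ¬R.
kb = [frozenset({('P', True)}),
      frozenset({('P', False), ('Q', True)}),
      frozenset({('Q', False), ('R', True)}),
      frozenset({('R', False)})]
print(refutes(kb))   # True: empty clause derived, so "Raja is angry" holds
```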
Example
a. John likes all kinds of food.
b. Apples and vegetables are food.
c. Anything anyone eats and is not killed by is food.
d. Anil eats peanuts and is still alive.
e. Harry eats everything that Anil eats.
• An inference system generates new facts so that an agent can update the KB. An inference system works mainly with two methods, which are:
• Forward chaining
• Backward chaining
Forward chaining, backward Chaining
• Forward Chaining
• Forward chaining is a data-driven method of deriving a particular goal from a given knowledge base and set of inference rules.
• It is a bottom-up approach, as it moves from known facts up to conclusions.
• It is a process of making conclusions based on known facts or data, starting from the initial state and working toward the goal state.
• The application of inference rules results in new knowledge which is then
added to the knowledge base.
• Inference rules are successively applied to elements of the knowledge base
until the goal is reached
• Example:
• Knowledge Base:
– If [X croaks and eats flies] Then [X is a frog]
– If [X chirps and sings] Then [X is a canary]
– If [X is a frog] Then [X is colored green]
– If [X is a canary] Then [X is colored yellow]
– [Fritz croaks and eats flies]
• Goal:
– [Fritz is colored Y]?
• Solution:
Step 1: Apply an inference rule between "If [X croaks and eats flies] Then [X is a frog]" and "[Fritz croaks and eats flies]".
• Step 2: The new fact "Fritz is a frog" is added to the knowledge base.
• Step 3: Again apply an inference rule, between "If [X is a frog] Then [X is colored green]" and "[Fritz is a frog]".
• Step 4: The new fact "Fritz is colored green" is added to the knowledge base. Every derived fact must be compared with the goal; this one matches, so the goal is reached.
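The steps above can be sketched as a small forward-chaining loop over the Fritz rule base; the rule encoding is an illustrative convention:

```python
# Forward chaining: fire every rule whose premises are all known facts,
# adding conclusions to the fact base until nothing new appears.
rules = [
    ({"croaks", "eats flies"}, "frog"),
    ({"chirps", "sings"}, "canary"),
    ({"frog"}, "green"),
    ({"canary"}, "yellow"),
]
facts = {"croaks", "eats flies"}   # what we know about Fritz

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # contains 'frog' and 'green': Fritz is colored green
```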
• Backward Chaining
• Backward chaining is a goal driven method of deriving a particular goal
from a given knowledge base and set of inference rules
• It is known as a top-down approach.
• In backward chaining, the goal is broken into sub-goal or sub-goals to
prove the facts true.
• It is called a goal-driven approach, as a list of goals decides which rules
are selected and used.
• When such a relation is found, the antecedent of the relation is added to
the list of goals (and not into the knowledge base, as is done in forward
chaining)
• Search proceeds in this manner until a goal can be matched against a fact
in the knowledge base
• Example: Step 1: Apply an inference rule between the knowledge base and the goal.
• Step 2: Unlike forward chaining, the new sub-goal is added to the goal list, not to the knowledge base.
• Step 3: Search continues until each sub-goal matches a fact in the knowledge base.
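Backward chaining over the same Fritz rule base can be sketched as a recursive goal-reduction procedure; the encoding is an illustrative convention:

```python
# Backward chaining: a goal succeeds if it is a known fact, or if some rule
# concludes it and every premise of that rule can itself be proved (sub-goals).
rules = [
    ({"croaks", "eats flies"}, "frog"),
    ({"chirps", "sings"}, "canary"),
    ({"frog"}, "green"),
    ({"canary"}, "yellow"),
]
facts = {"croaks", "eats flies"}

def prove(goal):
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(prove(p) for p in premises):
            return True
    return False

print(prove("green"))    # True: green <- frog <- croaks, eats flies
print(prove("yellow"))   # False: 'canary' cannot be proved
```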
Example
"As per the law, it is a crime for an American to sell weapons to hostile
nations. Country A, an enemy of America, has some missiles, and all the
missiles were sold to it by Robert, who is an American citizen."
(Substitutions from the proof graph: {T1/q}, {A/r}, {A/p}, {T1/p}.)
Backward Chaining
a. American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p)
b. Owns(A, T1)
c. Missile(T1)
d. ∀p (Missile(p) ∧ Owns(A, p)) → Sells(Robert, p, A)
e. Missile(p) → Weapon(p)
f. Enemy(p, America) → Hostile(p)
g. Enemy(A, America)
h. American(Robert).
Forward Chaining vs. Backward Chaining:
1. Forward chaining starts from known facts and applies inference rules to extract more data until it reaches the goal. Backward chaining starts from the goal and works backward through inference rules to find the facts that support the goal.
2. Forward chaining is a bottom-up approach; backward chaining is a top-down approach.
3. Forward chaining is known as a data-driven inference technique, as we reach the goal using the available data. Backward chaining is known as a goal-driven technique, as we start from the goal and divide it into sub-goals to extract the facts.
4. Forward chaining reasoning applies a breadth-first search strategy; backward chaining applies a depth-first search strategy.
5. Forward chaining is suitable for planning, monitoring, control, and interpretation applications. Backward chaining is suitable for diagnostic, prescription, and debugging applications.
6. Forward chaining operates in the forward direction; backward chaining operates in the backward direction.
Example
• The law says that it is a crime for an American to sell weapons to
hostile nations. The country Nono, an enemy of America, has some
missiles, and all of its missiles were sold to it by Colonel West, who is
American.
Uncertain Knowledge and Reasoning:
Uncertainty
• Uncertainty
In the complex real world, the agent's theory of the world and the actual events in the environment often contradict each other, which reduces the performance measure. E.g., suppose the agent's job is to drop a passenger at the airport on time, before the flight departs. The agent knows the problems it may face during the journey: traffic, a flat tire, or an accident. In such cases the agent cannot guarantee its full performance. This is called uncertainty.
• Probability:
• Objective probability
- Averages over repeated experiments of random events.
  E.g., estimate P(Rain) from historical observations.
- Makes assertions about future experiments.
- New evidence changes the reference class.
Bayes' rule for diagnosis:
P(D|T) = P(T|D) P(D) / (P(T|D) P(D) + P(T|D') P(D'))
where P(D|T) is the posterior probability of diagnosis D given test result T, P(T|D) is the conditional probability of T given D, P(D) is the prior probability of D, P(T|D') is the conditional probability of T given not D, and P(D') is the probability of not D.
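A numeric sketch of the diagnosis formula; the prior, sensitivity, and false-positive rate below are assumed illustrative values, not from the text:

```python
# Assumed values: disease prior P(D) = 0.01, test sensitivity P(T|D) = 0.95,
# false-positive rate P(T|D') = 0.05.
p_d = 0.01
p_t_given_d = 0.95
p_t_given_nd = 0.05

# Bayes' rule: P(D|T) = P(T|D)P(D) / (P(T|D)P(D) + P(T|D')P(D'))
posterior = (p_t_given_d * p_d) / (p_t_given_d * p_d + p_t_given_nd * (1 - p_d))
print(round(posterior, 4))   # ≈ 0.161: even after a positive test, D stays unlikely
```

The low posterior despite a fairly accurate test illustrates why the prior P(D) matters: with a rare disease, most positive results are false positives.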
Simple Inference in belief network
• Example 1
• Suppose that there are two events which could cause grass to be wet: either the sprinkler is on or it's
raining. Also, suppose that the rain has a direct effect on the use of the sprinkler (namely that when it
rains, the sprinkler is usually not turned on). Then the situation can be modeled with a Bayesian
network (shown). All three variables have two possible values, T (for true) and F (for false).
• The joint probability function is:
• P(G,S,R) = P(G | S,R)P(S | R)P(R)
• where the names of the variables have been abbreviated to G = Grass wet, S = Sprinkler, and R = Rain.
• The model can answer questions like "What is the probability that it is raining, given the grass is wet?" by using the conditional probability formula and summing over the nuisance variable S:
P(R=T | G=T) = P(G=T, R=T) / P(G=T) = Σ_S P(G=T, S, R=T) / Σ_{S,R} P(G=T, S, R)
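The rain-given-wet-grass query can be computed by brute-force enumeration over the joint P(G,S,R) = P(G|S,R) P(S|R) P(R). The CPT numbers below are assumed illustrative values (the classic textbook version of this example):

```python
from itertools import product

# Assumed CPTs for the sprinkler network (illustrative values).
P_R = {True: 0.2, False: 0.8}                       # P(Rain)
P_S = {True: {True: 0.01, False: 0.99},             # P(Sprinkler | Rain=T)
       False: {True: 0.4, False: 0.6}}              # P(Sprinkler | Rain=F)
P_G = {(True, True): 0.99, (True, False): 0.9,      # P(GrassWet=T | S, R)
       (False, True): 0.8, (False, False): 0.0}

def joint(g, s, r):
    # P(G,S,R) = P(G|S,R) P(S|R) P(R), exactly the factorization in the text.
    p_g = P_G[(s, r)] if g else 1 - P_G[(s, r)]
    return p_g * P_S[r][s] * P_R[r]

# P(Rain=T | GrassWet=T): sum out the nuisance variable Sprinkler.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(True, s, r) for s, r in product((True, False), repeat=2))
print(round(num / den, 4))   # ≈ 0.3577
```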
• Example 2:
• Assume your house has an alarm system against burglary. You live in a seismically active area, and the alarm system can occasionally be set off by an earthquake. You have two neighbors, Mary and John, who do not know each other. If they hear the alarm they call you, but this is not guaranteed.
• We want to represent the probability distribution of events: – Burglary, Earthquake, Alarm, Mary calls
and John calls
1. Directed acyclic graph
• Nodes = random variables Burglary, Earthquake, Alarm, Mary calls and John calls
• Links = direct (causal) dependencies between variables. The chance of Alarm is influenced by Earthquake, The
chance of John calling is affected by the Alarm