
SIES, Graduate School of Technology, Nerul

Chapter 4
Knowledge and Reasoning
Contents
• Knowledge-based Agents
• Wumpus World Example
• Brief Overview of Propositional Logic
• First-Order Logic: Syntax and Semantics
• Inference in FOL
• Forward Chaining, Backward Chaining
• Knowledge Engineering in First-Order Logic
• Unification
• Resolution
• Uncertain Knowledge and Reasoning: Uncertainty
• Representing Knowledge in an Uncertain Domain
• The Semantics of Belief Networks
• Simple Inference in Belief Networks
Knowledge-based Agents
1. Understanding Knowledge Representation
Knowledge and representation are distinct yet interconnected concepts in intelligent systems.
• Knowledge: describes the world; what a system knows determines its competence.
• Representation: how knowledge is encoded, which determines how effectively a system can use that knowledge to perform tasks.
2. Types of Knowledge
There are two primary types of knowledge:
1. Procedural Knowledge ("Knowing How")
1. This is knowledge of how to perform a task.
2. Example: Knowing how to drive a car.
3. Represented as rules, sequences, or step-by-step instructions in AI systems.
2. Declarative Knowledge ("Knowing That")
1. This refers to factual knowledge about things.
2. Example: Knowing that the speed limit on a motorway is 70 mph.
3. Represented as facts and relationships in AI systems.

• The Knowledge Representation models/mechanisms are often based on:


• ◊ Logic ◊ Rules
• ◊ Frames ◊ Semantic Net
• Different types of knowledge require different kinds of reasoning.
Generalized architecture for a knowledge-based agent
The Wumpus World in Artificial Intelligence
• Wumpus world:
• The Wumpus world is a simple world used to illustrate the worth of a knowledge-based agent and to demonstrate knowledge representation. It was inspired by the 1973 video game Hunt the Wumpus by Gregory Yob.
• The Wumpus world is a cave of 4×4 rooms connected by passageways, so there are 16 rooms in total.
• A knowledge-based agent explores this world.
• One room of the cave contains a beast called the Wumpus, which eats anyone who enters that room.
• The agent can shoot the Wumpus, but it has only a single arrow.
• Some rooms in the Wumpus world contain bottomless pits; an agent that falls into a pit is stuck there forever.
• The exciting thing about this cave is that one room may contain a heap of gold.
• The agent's goal is to find the gold and climb out of the cave without falling into a pit or being eaten by the Wumpus.
• The agent receives a reward if it comes out with the gold, and a penalty if it is eaten by the Wumpus or falls into a pit.
• Following is a sample diagram of the Wumpus world. It shows some rooms with pits, one room with the Wumpus, and the agent at square (1, 1) of the world.

• Some components help the agent navigate the cave. These components are given as follows:
1. The rooms adjacent to the Wumpus room are smelly: they contain a stench.
2. The rooms adjacent to a pit have a breeze, so when the agent is next to a pit it perceives the breeze.
3. A room glitters if and only if it contains the gold.
4. The agent can kill the Wumpus if it is facing it, and the dying Wumpus emits a horrible scream that can be heard anywhere in the cave.
PEAS description of Wumpus world:

Performance measure:
• +1000 reward points if the agent comes out of the cave with the gold.
• −1000 points penalty for being eaten by the Wumpus or falling into a pit.
• −1 for each action, and −10 for using the arrow.
• The game ends when the agent dies or comes out of the cave.

Environment:
• A 4×4 grid of rooms.
• The agent starts in square [1, 1], facing right.
• The locations of the Wumpus and the gold are chosen randomly from the squares other than [1, 1].
• Each square other than [1, 1] can be a pit with probability 0.2. (There are 15 candidate squares and about 3 expected pits, so roughly 1 square in 5 is a pit; hence the probability 3/15 = 1/5 = 0.2.)

Actuators:
• Turn left
• Turn right
• Move forward
• Grab
• Release
• Shoot

Sensors:
• The agent perceives a stench in the rooms directly (not diagonally) adjacent to the Wumpus.
• The agent perceives a breeze in the rooms directly adjacent to a pit.
• The agent perceives glitter in the room where the gold is present.
• The agent perceives a bump if it walks into a wall.
• When the Wumpus is shot, it emits a horrible scream that can be perceived anywhere in the cave.
• These percepts can be represented as a five-element list with an indicator for each sensor. For example, if the agent perceives a stench and a breeze but no glitter, no bump, and no scream, the percept is:
[Stench, Breeze, None, None, None].
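The percept rules above can be sketched in Python (a toy model for illustration; the coordinate convention, the function names, and the fixed slot order [Stench, Breeze, Glitter, Bump, Scream] are my assumptions, and bump/scream are event-driven so they stay None here):

```python
def adjacent(room):
    """Rooms directly (not diagonally) adjacent to `room` on the 4x4 grid."""
    x, y = room
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(x + dx, y + dy) for dx, dy in moves
            if 1 <= x + dx <= 4 and 1 <= y + dy <= 4]

def percept(room, wumpus, pits, gold):
    """Five-element percept list: [Stench, Breeze, Glitter, Bump, Scream]."""
    return [
        'Stench' if wumpus in adjacent(room) else None,
        'Breeze' if any(p in adjacent(room) for p in pits) else None,
        'Glitter' if room == gold else None,
        None,  # Bump occurs only on walking into a wall (event, not modelled)
        None,  # Scream occurs only when the Wumpus is shot (event, not modelled)
    ]

# At [2,1] with a pit at [3,1]: the agent perceives only a breeze.
print(percept((2, 1), wumpus=(1, 3), pits=[(3, 1)], gold=(2, 3)))
# -> [None, 'Breeze', None, None, None]
```

This matches the exploration walkthrough below: a breeze at [2,1], a stench at [1,2], and no percepts at the safe start square [1,1].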
Exploring Wumpus World
• Now we will explore the Wumpus world and will determine how the agent will find its goal by applying logical reasoning.
• Agent's first step:
• Initially the agent is in the first room, square [1,1], which we already know is safe. To show this on diagram (a), we mark the room OK. Symbol A represents the agent, B a breeze, G glitter or gold, V a visited room, P a pit, and W the Wumpus.
• At room [1,1] the agent perceives neither a breeze nor a stench, which means the adjacent squares are also OK.
Agent's second step:
• Now the agent moves forward, either to [1,2] or [2,1]. Suppose it moves to room [2,1]. There it perceives a breeze, which means a pit is adjacent to this room. The pit can be in [3,1] or [2,2], so we mark both with the symbol P? to indicate a possible pit.
• The agent stops, avoids any risky move, and goes back to room [1,1]. Rooms [1,1] and [2,1] have now been visited, so we mark them with the symbol V.

Agent's third step:

At the third step, the agent moves to room [1,2], which is OK. In room [1,2] the agent perceives a stench, so the Wumpus must be nearby. The Wumpus cannot be in room [1,1] by the rules of the game, and it cannot be in [2,2] either (the agent detected no stench at [2,1]). Therefore the agent infers that the Wumpus is in room [1,3]. Since there is no breeze in the current room, [2,2] contains neither a pit nor the Wumpus, so it is safe; we mark it OK, and the agent moves on to [2,2].

Agent's fourth step:

At room [2,2] there is no stench and no breeze, so suppose the agent decides to move to [2,3]. At room [2,3] the agent perceives glitter, so it grabs the gold and climbs out of the cave.
Propositional Logic in AI
• Propositional logic (PL) is the simplest form of logic, in which all statements are made of propositions.
• A proposition is a declarative statement that is either true or false.
• It is a technique for representing knowledge in logical and mathematical form.
Brief Overview of propositional logic
• A simple language that is useful for showing key ideas and definitions.
• The user defines a set of propositional symbols, like P and Q, and the semantics of each symbol. For example,
P means "It is hot"
Q means "It is humid"
R means "It is raining"

• A sentence (also called a formula or well-formed formula(wff)) is defined as:


1. A symbol
2. If S is a sentence, then ~S is a sentence, where "~" is the "not" logical operator
3. If S and T are sentences, then (S v T), (S ^ T), (S => T), and (S <=> T) are sentences,
where the four logical connectives correspond to "or," "and" "implies," and "if and only
if," respectively
• Examples of PL sentences:
(P ^ Q) => R (here meaning "If it is hot and humid, then it is raining")
Q => P (here meaning "If it is humid, then it is hot")
Q (meaning "It is humid.")

• Given the truth values of all of the constituent symbols in a sentence, that sentence can be
"evaluated" to determine its truth value (True or False). This is called an interpretation of the
sentence.
• A model is an interpretation (i.e., an assignment of truth values to symbols) of a set of sentences
such that each sentence is True. A model is just a formal mathematical structure that "stands in" for
the world.
• A valid sentence (also called a tautology) is a sentence that is True under all interpretations. Hence,
no matter what the world is actually like or what the semantics is, the sentence is True. For example
"It's raining or it's not raining."
• An inconsistent sentence (also called unsatisfiable or a contradiction) is a sentence that is False
under all interpretations. Hence the world is never like what it describes. For example, "It's raining
and it's not raining."
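These definitions can be checked mechanically. In the sketch below (the helper names are my own, not from the text), a sentence is a Python function from an interpretation — a dict assigning True/False to each symbol — to a truth value, and validity/unsatisfiability are tested by enumerating all interpretations:

```python
from itertools import product

def interpretations(symbols):
    """Yield every assignment of truth values to the proposition symbols."""
    for values in product([True, False], repeat=len(symbols)):
        yield dict(zip(symbols, values))

def is_valid(sentence, symbols):
    """Valid (a tautology): True under every interpretation."""
    return all(sentence(i) for i in interpretations(symbols))

def is_unsatisfiable(sentence, symbols):
    """Inconsistent (a contradiction): False under every interpretation."""
    return not any(sentence(i) for i in interpretations(symbols))

# "It's raining or it's not raining" is valid:
raining_or_not = lambda i: i['R'] or not i['R']
# "It's raining and it's not raining" is inconsistent:
raining_and_not = lambda i: i['R'] and not i['R']
```

A sentence that is neither valid nor unsatisfiable, such as plain R, fails both tests: it is true in some interpretations and false in others.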
• Following are some basic facts about propositional logic:


• Propositional logic is also called Boolean logic, as it works on 0 and 1.
• In propositional logic we use symbolic variables to represent the logic, and any symbol can represent a proposition, such as A, B, C, P, Q, R, etc.
• A proposition can be either true or false, but not both.
• Propositional logic is built only from propositions and logical connectives; it cannot talk about objects, relations, or functions (that requires first-order logic).
• These connectives are also called logical operators.
• Propositions and connectives are the basic elements of propositional logic.
• A connective is a logical operator that joins two sentences.
• A propositional formula that is always true is called a tautology; it is also called a valid sentence.
• A propositional formula that is always false is called a contradiction.
• A propositional formula that is true under some interpretations and false under others is called a contingency (it is neither valid nor a contradiction).
• Statements that are questions, commands, or opinions are not propositions, such as "Where is Ram?"
• The syntax of propositional logic defines the allowable sentences for the
knowledge representation. There are two types of Propositions:
1.Atomic Propositions
2.Compound propositions
• Atomic Proposition: Atomic propositions are the simple propositions. It consists
of a single proposition symbol. These are the sentences which must be either true
or false.
Ex: a) 2+2 is 4, it is an atomic proposition as it is a true fact.
b) "The Sun is cold" is also a proposition as it is a false fact.

• Compound proposition: Compound propositions are constructed by combining simpler or atomic propositions, using parentheses and logical connectives.
Ex: a) "It is raining today, and the street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."
Logical Connectives

• Logical connectives are used to connect simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are five main connectives, given as follows:
1. Negation: A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a negative literal.
2. Conjunction: A sentence with the ∧ connective, such as P ∧ Q, is called a conjunction.
Example: "Rohan is intelligent and hardworking" can be written as
P = Rohan is intelligent, Q = Rohan is hardworking: P ∧ Q.
3. Disjunction: A sentence with the ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are propositions.
Example: "Ritika is a doctor or an engineer."
Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.
4. Implication: A sentence such as P → Q is called an implication. Implications are also known as if-then rules.
Example: "If it is raining, then the street is wet."
Let P = It is raining and Q = The street is wet; this is represented as P → Q.
5. Biconditional: A sentence such as P ⇔ Q is a biconditional sentence.
Example: "I am breathing if and only if I am alive."
P = I am breathing, Q = I am alive; this can be represented as P ⇔ Q.
Logical Equivalence
• Logical equivalence is one of the features of propositional logic. Two propositions are logically equivalent if and only if their columns in the truth table are identical in every row.
• For two equivalent propositions A and B we write A ⇔ B. In the truth table below, the columns for ¬A ∨ B and A → B are identical; hence ¬A ∨ B is equivalent to A → B.
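The truth-table test for equivalence can be done by brute force in Python (a small sketch; `equivalent` and `implies` are illustrative helper names of my own):

```python
from itertools import product

def equivalent(f, g, symbols):
    """Logically equivalent iff the truth-table columns of f and g agree
    on every row, i.e. under every assignment of truth values."""
    rows = (dict(zip(symbols, values))
            for values in product([True, False], repeat=len(symbols)))
    return all(f(row) == g(row) for row in rows)

def implies(p, q):
    """Truth table of p -> q."""
    return (not p) or q

# The column for A -> B is identical to the column for ~A v B:
print(equivalent(lambda i: implies(i['A'], i['B']),
                 lambda i: (not i['A']) or i['B'],
                 ['A', 'B']))  # -> True
```

The same helper also confirms that an implication matches its contrapositive but not its converse, which is used in the inference-rule section later.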
Properties of Operators

• Commutativity:
• P ∧ Q = Q ∧ P,
• P ∨ Q = Q ∨ P.
• Associativity:
• (P ∧ Q) ∧ R = P ∧ (Q ∧ R),
• (P ∨ Q) ∨ R = P ∨ (Q ∨ R).
• Identity and domination:
• P ∧ True = P (identity for ∧),
• P ∨ True = True (domination for ∨).
• Distributivity:
• P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R),
• P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
• De Morgan's laws:
• ¬(P ∧ Q) = (¬P) ∨ (¬Q),
• ¬(P ∨ Q) = (¬P) ∧ (¬Q).
• Double-negation elimination:
• ¬ (¬P) = P.
Examples:
Q.1) Consider the following set of facts:
I. Rani is hungry.
II. If Rani is hungry, she barks.
III. If Rani is barking, then Raja is angry.
Convert these into propositional logic statements.
Solution:
Step 1: We can use the following propositional symbols:
P: Rani is hungry
Q: Rani is barking
R: Raja is angry
Step 2: The propositional logic statements are:
I. P
II. P=>Q
III. Q=>R
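The derivation this knowledge base supports can be automated with a tiny forward chainer that applies Modus Ponens until nothing new follows (an illustrative sketch of my own; it handles only single-premise rules of the form P => Q):

```python
def forward_chain(facts, rules):
    """Apply Modus Ponens exhaustively: whenever premise P is known and a
    rule P => Q exists, add Q. `rules` is a list of (premise, conclusion)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# P: Rani is hungry, Q: Rani is barking, R: Raja is angry
print(sorted(forward_chain({'P'}, [('P', 'Q'), ('Q', 'R')])))  # -> ['P', 'Q', 'R']
```

Starting from the fact P, the chainer derives Q and then R, i.e. "Raja is angry" — the same conclusion proved by resolution later in the chapter.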
The following statements are not propositions:
• Close the door. (Command)
• Do you speak French? (Question)
• What a beautiful picture! (Exclamation)
• I always tell lies. (Paradoxical/self-referential)
• P(x): x + 3 = 5 (Predicate with a free variable)
Identify which of the following statements are propositions:

1. France is a country.
2. 2020 will be a leap year.
3. Sun rises in the west.
4. P(x): x + 6 = 7
5. P(5): 5 + 6 = 2
6. Apples are oranges.
7. Grapes are black.
8. Two and two makes 4.
9. x > 10
10. Open the door.
11. Mumbai is in India.
12. I always tell truth.
13. I always tell lie.
14. Do not go there.
15. This sentence is true.
16. This sentence is false.
17. It will rain tomorrow.
18. Fan is rotating.
19. It will rain tomorrow.
20. Fan is rotating.
✅ Propositions: 1, 2, 3, 5, 6, 7, 8, 11, 12, 13, 17, 18, 19, 20
1. France is a country. → True
2. 2020 will be a leap year. → True (factual statement)
3. Sun rises in the west. → False (a proposition can be false)
5. P(5): 5 + 6 = 2 → False (a complete, testable statement)
6. Apples are oranges. → False
7. Grapes are black. → can be true or false, but still a proposition
8. Two and two makes 4. → True
11. Mumbai is in India. → True
12. I always tell truth. → proposition (debatable, but it has a truth value)
13. I always tell lie. → proposition (paradoxical, but treated as a proposition here)
17, 19. It will rain tomorrow. → proposition (its truth value is settled tomorrow)
18, 20. Fan is rotating. → proposition (true or false depending on the situation)

❌ Not propositions: 4, 9, 10, 14, 15, 16
4. P(x): x + 6 = 7 → contains a free variable, not a complete statement
9. x > 10 → depends on x, incomplete
10. Open the door. → command
14. Do not go there. → command
15. This sentence is true. → self-referential, not a valid proposition
16. This sentence is false. → paradox, not a valid proposition
Limitations of Propositional Logic
• We cannot represent quantified statements like all, some, or none in propositional logic. Examples:
• All the girls are intelligent.
• Some apples are sweet.
• Propositional logic has limited expressive power.

Rules of Inference in Artificial Intelligence
• Inference:
In artificial intelligence we need intelligent computers that can derive new conclusions from existing knowledge and evidence; generating conclusions from evidence and facts is termed inference.
• Inference rules:
- Inference rules are templates for generating valid arguments. They are applied to derive proofs in artificial intelligence, and a proof is a sequence of conclusions that leads to the desired goal.
- In inference rules, the implication connective plays the most important role.
Following are some terminologies related to inference rules:
• Implication: one of the logical connectives, written P → Q. It is a Boolean expression.
• Converse: the implication with its two sides swapped: Q → P.
• Inverse: the implication with both sides negated: ¬P → ¬Q.
• Contrapositive: the implication with both sides negated and swapped: ¬Q → ¬P.
From the above terms, some of these compound statements are equivalent to each other, which we can prove using a truth table:

Hence from the above truth table we can see that P → Q is equivalent to its contrapositive ¬Q → ¬P, and that the converse Q → P is equivalent to the inverse ¬P → ¬Q.
1. Modus Ponens:
The Modus Ponens rule is one of the most important rules of inference. It states that if P → Q and P are true, then we can infer that Q is true. It can be represented as:

Example:
Statement-1: "If I am sleepy then I go to bed" ==> P → Q
Statement-2: "I am sleepy" ==> P
Conclusion: "I go to bed." ==> Q
Hence, if P → Q is true and P is true, then Q is true.

Proof by Truth table:


2. Modus Tollens:
The Modus Tollens rule states that if P → Q is true and ¬Q is true, then ¬P will also be true. It can be represented as:

Statement-1: "If I am sleepy then I go to bed" ==> P → Q
Statement-2: "I do not go to the bed." ==> ¬Q
Conclusion: "I am not sleepy" ==> ¬P
Proof by Truth table:
3. Hypothetical Syllogism:

The Hypothetical Syllogism rule states that if P → Q and Q → R are true, then P → R is also true. It can be represented with the following notation:
Example:
Statement-1: If you have my home key then you can unlock my home. P → Q
Statement-2: If you can unlock my home then you can take my money. Q → R
Conclusion: If you have my home key then you can take my money. P → R
Proof by truth table:
4. Disjunctive Syllogism:
• The Disjunctive Syllogism rule states that if P ∨ Q is true and ¬P is true, then Q will be true. It can be represented as:

• Example:
• Statement-1: Today is Sunday or Monday. ==>P∨Q
Statement-2: Today is not Sunday. ==> ¬P
Conclusion: Today is Monday. ==> Q
• Proof by truth-table:
5. Addition:
The Addition rule is one of the common inference rules. It states that if P is true, then P ∨ Q will be true, for any proposition Q.

Example:
Statement: I have a vanilla ice-cream. ==> P
Let Q: I have chocolate ice-cream.
Conclusion: I have vanilla or chocolate ice-cream. ==> (P ∨ Q)
Proof by Truth-Table:
6. Simplification:
The Simplification rule states that if P ∧ Q is true, then P (or Q) will also be true. It can be represented as:

Proof by Truth-Table:
7. Resolution:
The Resolution rule states that if P ∨ Q and ¬P ∨ R are true, then Q ∨ R will also be true. It can be represented as:

Proof by Truth-Table:
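The soundness of the resolution rule can be confirmed the same way the truth-table proof works: in no row of the table are both premises true while the conclusion Q ∨ R is false. A quick Python check (the function name is my own):

```python
from itertools import product

def resolution_is_sound():
    """Check every row of the truth table: whenever (P v Q) and (~P v R)
    are both true, (Q v R) must also be true."""
    for P, Q, R in product([True, False], repeat=3):
        premises = (P or Q) and ((not P) or R)
        conclusion = Q or R
        if premises and not conclusion:
            return False
    return True

print(resolution_is_sound())  # -> True
```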
Knowledge Engineering in First-Order Logic
What is knowledge engineering?
• The process of constructing a knowledge base in first-order logic is called knowledge engineering.
In knowledge engineering, someone who investigates a particular domain, learns the important concepts of
that domain, and generates a formal representation of its objects is known as a knowledge engineer.
The knowledge-engineering process:
1. Identify the task
2. Assemble the relevant knowledge
3. Decide on vocabulary
4. Encode general knowledge about the domain
5. Encode a description of the problem instance
6. Pose queries to the inference procedure and get answers
7. Debug the knowledge base
First Order Logic: Syntax and Semantic
• User defines these primitives:
Constant symbols (i.e., the "individuals" in the world)
E.g., Mary, 3
Function symbols (mapping individuals to individuals)
E.g., father-of(Mary) = John, color-of(Sky) = Blue
Predicate symbols (mapping from individuals to truth values)
E.g., greater(5,3), green(Grass), color(Grass, Green)

• FOL supplies these primitives:


Variable symbols. E.g., x, y
Connectives. Same as in PL: not (~), and (^), or (v), implies (=>), if and only if (<=>)
Quantifiers: Universal (⩝) and Existential (ⱻ)
• Universal quantification: (⩝x) P(x) means that P holds for all values of x in the domain associated with that variable. (A universal statement behaves like a big conjunction over the domain.)
E.g., (⩝x) dolphin(x) => mammal(x)

• Existential quantification: (ⱻx) P(x) means that P holds for some value of x in the domain associated with that variable. (An existential statement behaves like a big disjunction over the domain.)
E.g., (ⱻx) mammal(x) ^ lays-eggs(x)

• Universal quantifiers are usually used with "implies" to form "if-then rules."

E.g., (⩝x) IT-student(x) => smart(x) means "All IT students are smart."
You rarely use universal quantification to make blanket statements about every individual in the world: (⩝x) IT-student(x) ^ smart(x) means that everyone in the world is an IT student and is smart.

• Existential quantifiers are usually used with "and" to specify a list of properties or facts about an individual.
E.g., (ⱻx) IT-student(x) ^ smart(x) means "there is an IT student who is smart."
A common mistake is to represent this English sentence as the FOL sentence:
(ⱻx) IT-student(x) => smart(x). But consider what happens when there is a person who is NOT an IT student: the implication is then vacuously true for that person, so the sentence holds even if no smart IT student exists.
Examples
1. All birds fly.
- Here the predicate is "fly(bird)."
- Since all birds fly, it is represented as follows:
∀x bird(x) → fly(x).

2. Every man respects his parent.

- Here the predicate is "respect(x, y)," where x = man and y = parent.
- Since this holds for every man, we use ∀, and it is represented as follows:
∀x man(x) → respects(x, parent).
Examples
3. Some boys play cricket.
- Here the predicate is "play(x, y)," where x = boys and y = game.
- Since only some boys play, we use ∃, and it is represented as:
∃x boys(x) ∧ play(x, cricket).

4. Not all students like both Mathematics and Science.

- Here the predicate is "like(x, y)," where x = student and y = subject.
- Since the statement denies "all students," we use ∀ with negation, giving the following representation:
¬∀(x) [student(x) → like(x, Mathematics) ∧ like(x, Science)].
• Switching the order of universal quantifiers does not change the meaning:
• (⩝x)(⩝y) P(x,y) is logically equivalent to (⩝y)(⩝x) P(x,y).
Similarly, you can switch the order of existential quantifiers.

• Switching the order of a universal and an existential quantifier does change the meaning:

- Everyone likes someone: (⩝x)(ⱻy) likes(x,y)
- Someone is liked by everyone: (ⱻy)(⩝x) likes(x,y)
• Translating English to First Order Logic (FOL)
• Some examples:
Every gardener likes the sun.
• (⩝ x): gardener(x) => likes(x,Sun)

You can fool some of the people all of the time.


• (ⱻ x): (person(x) ^ (⩝ t)(time(t) => can-fool(x,t)))

You can fool all of the people some of the time.


• (⩝ x): (person(x) => (ⱻ t) (time(t) ^ can-fool(x,t)))

All purple mushrooms are poisonous.


• (⩝ x): (mushroom(x) ^ purple(x)) => poisonous(x)
No purple mushroom is poisonous.
• ~(ⱻx): purple(x) ^ mushroom(x) ^ poisonous(x)
Or, equivalently,
• (⩝ x): (mushroom(x) ^ purple(x)) => ~poisonous(x)

There are exactly two purple mushrooms.


• (ⱻx)(ⱻy): mushroom(x) ^ purple(x) ^ mushroom(y) ^ purple(y) ^ ~(x=y) ^ (⩝z) ((mushroom(z) ^ purple(z)) => ((x=z) v (y=z)))
∃x ∃y: there exist two mushrooms x and y
Mushroom(x) ∧ Purple(x) ∧ Mushroom(y) ∧ Purple(y): both x and y are purple mushrooms
x ≠ y: they are distinct
∀z (Mushroom(z) ∧ Purple(z) → (z = x ∨ z = y)): any purple mushroom z must be either x or y (no third one)

Deb is not tall.


• ~tall(Deb)
Unification
• Unification is a process of making two different logical atomic expressions
identical by finding a substitution. Unification depends on the substitution
process.
• The term Most General Unifier is the simplest substitution that unifies two
expressions without adding unnecessary constraints.

• It takes two literals as input and makes them identical using substitution.

• Let Ψ1 and Ψ2 be two atomic sentences and 𝜎 be a unifier such that Ψ1𝜎 = Ψ2𝜎; then 𝜎 = UNIFY(Ψ1, Ψ2).

Why MGU?
• Ensures correctness and generality of logical inference
• Helps AI reason, infer, and answer queries efficiently
• Essential for expert systems, NLP, Prolog, semantic web reasoning
• Avoids unnecessary constraints, keeps the solution general
Example:
Find the Most General Unifier(MGU) for Unify{King(x), King(John)}

• Let Ψ1 = King(x), Ψ2 = King(John),

• Substitution θ = {John/x} is a unifier for these atoms; applying this substitution makes both expressions identical:

• Ψ1θ = King(John), Ψ2θ = King(John).
Implementation of the Algorithm
• Step 1: Initialize the substitution set to be empty.

• Step 2: Recursively unify atomic sentences:

1. Check for an identical expression match.

2. If one expression is a variable vi, and the other is a term ti which does not contain variable vi, then:
1. Substitute ti/vi in the existing substitutions.
2. Add ti/vi to the substitution set.
3. If both expressions are functions, then the function names must be identical and the number of arguments must be the same in both expressions.
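The steps above can be sketched as a small Python unifier (an illustrative simplification, not production code: constants and variables are strings, a compound term is a tuple `(functor, arg1, ...)`, and the caller supplies the set of variable names):

```python
def substitute(t, s):
    """Apply substitution dict s (variable -> term) to term t."""
    if isinstance(t, tuple):  # compound term: (functor, arg1, ...)
        return (t[0],) + tuple(substitute(a, s) for a in t[1:])
    return s.get(t, t)

def occurs(v, t):
    """Occurs check: does variable v appear inside term t?"""
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a) for a in t[1:])
    return False

def unify(x, y, variables, s=None):
    """Return the MGU of x and y as a dict, or None if unification fails."""
    s = {} if s is None else s
    x, y = substitute(x, s), substitute(y, s)
    if x == y:                                   # Step 2.1: identical match
        return s
    if isinstance(x, str) and x in variables:
        if occurs(x, y):                         # x cannot unify with a term containing x
            return None
        s = {v: substitute(t, {x: y}) for v, t in s.items()}  # Step 2.2.1
        s[x] = y                                 # Step 2.2.2: add y/x to the set
        return s
    if isinstance(y, str) and y in variables:
        return unify(y, x, variables, s)
    if (isinstance(x, tuple) and isinstance(y, tuple)
            and x[0] == y[0] and len(x) == len(y)):  # Step 2.3: same functor/arity
        for a, b in zip(x[1:], y[1:]):
            s = unify(a, b, variables, s)
            if s is None:
                return None
        return s
    return None                                  # mismatched constants or functors

# UNIFY(King(x), King(John)) = {John/x}
print(unify(('King', 'x'), ('King', 'John'), {'x'}))  # -> {'x': 'John'}
```

On the worked examples below, this sketch reproduces the same results: p(f(a), g(Y)) vs p(X, X) fails (g vs f), and p(X, X) vs p(Z, f(Z)) fails on the occurs check.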
Examples
1. Find the MGU of {p(f(a), g(Y)) and p(X, X)}
Sol: S0 => Here, Ψ1 = p(f(a), g(Y)) and Ψ2 = p(X, X)
SUBST θ = {f(a)/X}
S1 => Ψ1 = p(f(a), g(Y)), and Ψ2 = p(f(a), f(a))
Now g(Y) must unify with f(a), but the function symbols g and f differ: unification fails.
2. Find the MGU of {p(b, X, f(g(Z))) and p(Z, f(Y), f(Y))}

Here, Ψ1 = p(b, X, f(g(Z))) , and Ψ2 = p(Z, f(Y), f(Y))


S0 => { p(b, X, f(g(Z))); p(Z, f(Y), f(Y))}
SUBST θ={b/Z}

S1 => { p(b, X, f(g(b))); p(b, f(Y), f(Y))}


SUBST θ={f(Y) /X}

S2 => { p(b, f(Y), f(g(b))); p(b, f(Y), f(Y)) }

SUBST θ = {g(b)/Y}

S3 => { p(b, f(g(b)), f(g(b))); p(b, f(g(b)), f(g(b))) } Unified successfully.

Unifier = { b/Z, f(Y)/X, g(b)/Y }.


3. Find the MGU of UNIFY(prime(11), prime(y))

Here, Ψ1 = prime(11) and Ψ2 = prime(y)

S0 => {prime(11), prime(y)}

SUBST θ = {11/y}

S1 => {prime(11), prime(11)}, successfully unified.

Unifier: {11/y}.
4. Find the MGU of {p(X, X) and p(Z, f(Z))}

Here, Ψ1 = p(X, X) and Ψ2 = p(Z, f(Z))

S0 => {p(X, X), p(Z, f(Z))}

SUBST θ = {Z/X}

S1 => {p(Z, Z), p(Z, f(Z))}

Now Z must unify with f(Z), but Z occurs inside f(Z), so the occurs check fails: unification fails.

Resolution
• Resolution is used when several statements are given and we need to prove a conclusion from those statements. Unification is a key concept in proofs by resolution. Resolution is a single inference rule which can efficiently operate on the conjunctive normal form or clausal form.

• Clause: A disjunction of literals (atomic sentences) is called a clause. A clause containing exactly one literal is called a unit clause.

• Conjunctive Normal Form: A sentence represented as a conjunction of clauses is said to be in conjunctive normal form, or CNF.
Normal Forms in Propositional Logic
1. Conjunctive normal form (CNF):

e.g. (P ∨ Q ∨ R) ∧ (P ∨ Q) ∧ (P ∨ R) ∧ P

It is a conjunction (∧) of disjunctions (∨), where the disjunctions (the clauses) are:

1. (P ∨ Q ∨ R)
2. (P ∨ Q)
3. (P ∨ R)
4. P

2. Disjunctive normal form (DNF):

e.g. (P ∧ Q ∧ R) ∨ (P ∧ Q) ∨ (P ∧ R) ∨ P

It is a disjunction (∨) of conjunctions (∧).
Conversion to CNF example
Q. Convert into CNF: ((P → Q) → R)

Solution:
Step 1: Eliminate the inner implication: ((P → Q) → R) ==> ((¬P ∨ Q) → R)
Step 2: Eliminate the outer implication: ((¬P ∨ Q) → R) ==> ¬(¬P ∨ Q) ∨ R
Step 3: Move ¬ inwards (De Morgan): ¬(¬P ∨ Q) ∨ R ==> (P ∧ ¬Q) ∨ R
Step 4: Distribute ∨ over ∧: (P ∧ ¬Q) ∨ R ==> (P ∨ R) ∧ (¬Q ∨ R)

CNF
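The conversion can be sanity-checked by confirming that the original formula and the final CNF agree under all eight truth assignments (a quick Python check; the helper names are my own):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def cnf_conversion_correct():
    """((P -> Q) -> R) must agree with (P v R) ^ (~Q v R) on all 8 rows."""
    for P, Q, R in product([True, False], repeat=3):
        original = implies(implies(P, Q), R)
        cnf = (P or R) and ((not Q) or R)
        if original != cnf:
            return False
    return True

print(cnf_conversion_correct())  # -> True
```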
Steps for Resolution:
1.Conversion of facts into first-order logic.
2.Convert FOL statements into CNF
3.Negate the statement which needs to prove (proof by contradiction)
4.Draw resolution graph (unification).
• 1) Eliminate implication '=>'
• a => b ≡ ¬a ∨ b
• (a ∧ b) => c ≡ ¬(a ∧ b) ∨ c

• 2) Eliminate '∧' (split each conjunction into separate clauses)
• a ∧ b gives the clauses: a and b
• a ∧ b ∧ ¬c gives: a, b, ¬c
• a ∧ (b ∨ c) gives: a and b ∨ c
• a ∨ (b ∧ c) gives (after distribution): a ∨ b and a ∨ c
3) Eliminate ‘ ⱻ’
• We can eliminate ⱻ by substituting for the variable a reference to a function that
produces a desired value.
• E.g. There is a teacher
• ⱻ x: Teacher(x) ,then by eliminating ‘ⱻ’ we get,
• “Teacher (s1)” where s1 is a function that somehow produces a value that satisfies the
predicate teacher.

4) Eliminate ‘⩝’
• To eliminate ‘⩝’ , convert the fact into prefix normal form in which all the universal
quantifiers are at the beginning of formula.
• Eg. All students are intelligent.
• ⩝ x: Student(x)->Intelligent(x)
• After eliminating ⩝ x we get,
• Student(x)->Intelligent(x).
Q.1) Consider the following set of facts:
I. Rani is hungry.
II. If Rani is hungry, she barks.
III. If Rani is barking, then Raja is angry.
Convert these into propositional logic statements,
and prove that 'Raja is angry' by resolution.
Sol:
Step 1: We can use the following propositional symbols:
P: Rani is hungry
Q: Rani is barking
R: Raja is angry
Step 2: The propositional logic statements are:
I. P
II. P=>Q
III. Q=>R

Step 3: Convert logic into CNF


I. P
II. ~P V Q
III. ~Q V R
Step 4: Negate the conclusion
• Raja is angry: R
• After negation: Raja is not angry, i.e. ~R

Step 5: Resolution tree

Thus we derive the empty clause, and we can conclude that 'Raja is angry'.
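Steps 3-5 can be mechanized with a minimal propositional resolution prover (an illustrative sketch under simple assumptions: clauses are sets of literal strings, '~' marks negation, and the goal is a single literal):

```python
def resolve(c1, c2):
    """All resolvents of two clauses. A clause is a frozenset of literal
    strings; '~' marks negation."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return resolvents

def resolution_prove(clauses, goal):
    """Prove a literal `goal` by refutation: add its negation and saturate
    with resolution, searching for the empty clause."""
    neg = goal[1:] if goal.startswith('~') else '~' + goal
    clauses = {frozenset(c) for c in clauses} | {frozenset([neg])}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolve(a, b):
                        if not r:        # empty clause: contradiction found
                            return True
                        new.add(r)
        if new <= clauses:               # saturated without contradiction
            return False
        clauses |= new

# CNF from Step 3: P, ~P v Q, ~Q v R; prove R ("Raja is angry").
print(resolution_prove([{'P'}, {'~P', 'Q'}, {'~Q', 'R'}], 'R'))  # -> True
```

Adding ~R to the clause set and resolving derives Q, then R, and finally the empty clause, mirroring the resolution tree above.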
Example
a. John likes all kinds of food.
b. Apples and vegetables are food.
c. Anything anyone eats without being killed is food.
d. Anil eats peanuts and is still alive.
e. Harry eats everything that Anil eats.

Prove by resolution that:

f. John likes peanuts.
Step-1: Conversion of Facts into FOL
a. John likes all kinds of food: ∀x food(x) → likes(John, x)
b. Apples and vegetables are food: food(Apple) Λ food(vegetables)
c. Anything anyone eats without being killed is food: ∀x ∀y (eats(x, y) Λ ¬killed(x)) → food(y)
d. Anil eats peanuts and is still alive: eats(Anil, Peanuts) Λ alive(Anil)
e. Harry eats everything that Anil eats: ∀x eats(Anil, x) → eats(Harry, x)
f. Implicit fact: anyone not killed is alive: ∀x ¬killed(x) → alive(x)
g. Implicit fact: anyone alive is not killed: ∀x alive(x) → ¬killed(x)
h. Conclusion to prove: likes(John, Peanuts)
Step-2: Conversion of FOL into CNF
• Eliminate all implication (→) and
rewrite
a. ∀x ¬ food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀x ∀y ¬ [eats(x, y) Λ ¬ killed(x)] V food(y)
d. eats (Anil, Peanuts) Λ alive(Anil)
e. ∀x ¬ eats(Anil, x) V eats(Harry, x)
f. ∀x¬ [¬ killed(x) ] V alive(x)
g. ∀x ¬ alive(x) V ¬ killed(x)
h. likes(John, Peanuts).
Step-2: Conversion of FOL into CNF
• Move negation (¬) inwards and rewrite (De Morgan's law)
a. ∀x ¬ food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀x ∀y ¬ eats(x, y) V killed(x) V food(y)
d. eats (Anil, Peanuts) Λ alive(Anil)
e. ∀x ¬ eats(Anil, x) V eats(Harry, x)
f. ∀x killed(x) V alive(x)
g. ∀x ¬ alive(x) V ¬ killed(x)
h. likes(John, Peanuts).
Step-2: Conversion of FOL into CNF
• Rename variables, or standardize variables apart:
a. ∀x ¬ food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀y ∀z ¬ eats(y, z) V killed(y) V food(z)
d. eats (Anil, Peanuts) Λ alive(Anil)
e. ∀w ¬ eats(Anil, w) V eats(Harry, w)
f. ∀g killed(g) V alive(g)
g. ∀k ¬ alive(k) V ¬ killed(k)
h. likes(John, Peanuts).
Step-2: Conversion of FOL into CNF
• Eliminate existential quantifiers.
In this step we would eliminate the existential quantifier ∃; this process is
known as Skolemization. Since this problem contains no existential
quantifier, all the statements remain the same in this step.
Step-2: Conversion of FOL into CNF
• Drop universal quantifiers.
In this step we drop all universal quantifiers, since every remaining
variable is implicitly universally quantified.
a. ¬ food(x) V likes(John, x)
b. food(Apple)
c. food(vegetables)
d. ¬ eats(y, z) V killed(y) V food(z)
e. eats (Anil, Peanuts)
f. alive(Anil)
g. ¬ eats(Anil, w) V eats(Harry, w)
h. killed(g) V alive(g)
i. ¬ alive(k) V ¬ killed(k)
j. likes(John, Peanuts).
Step-2: Conversion of FOL into CNF
• Distribute conjunction (∧) over disjunction (˅).
This step makes no change in this problem; the clauses remain:
a. ¬ food(x) V likes(John, x)
b. food(Apple)
c. food(vegetables)
d. ¬ eats(y, z) V killed(y) V food(z)
e. eats (Anil, Peanuts)
f. alive(Anil)
g. ¬ eats(Anil, w) V eats(Harry, w)
h. killed(g) V alive(g)
i. ¬ alive(k) V ¬ killed(k)
j. likes(John, Peanuts).
Step-3: Negate the statement to be proved
• Apply negation to the conclusion statement, which will be written as
¬ likes(John, Peanuts). The other clauses (a–i) remain unchanged.
Step-4: Draw Resolution graph:
a. ¬ food(x) V likes(John, x)
b. food(Apple)
c. food(vegetables)
d. ¬ eats(y, z) V killed(y) V food(z)
e. eats (Anil, Peanuts)
f. alive(Anil)
g. ¬ eats(Anil, w) V eats(Harry, w)
h. killed(g) V alive(g)
i. ¬ alive(k) V ¬ killed(k)
j. ¬ likes(John, Peanuts).
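Each step of the resolution graph unifies complementary literals, such as likes(John, x) with ¬ likes(John, Peanuts). A minimal unification sketch follows; the term representation (lowercase strings as variables, capitalized strings as constants and predicate names, tuples as compound terms) is an assumption, and the occurs check is omitted for brevity.

```python
# Minimal unification sketch (illustrative; no occurs check).
# Variables are lowercase strings, constants/predicates are capitalized,
# and compound terms are tuples like ("Likes", "John", "x").

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def substitute(t, theta):
    """Apply a substitution (dict) to a term, following binding chains."""
    if is_var(t):
        return substitute(theta[t], theta) if t in theta else t
    if isinstance(t, tuple):
        return tuple(substitute(a, theta) for a in t)
    return t

def unify(x, y, theta=None):
    """Return a most general unifier as a dict, or None on failure."""
    if theta is None:
        theta = {}
    x, y = substitute(x, theta), substitute(y, theta)
    if x == y:
        return theta
    if is_var(x):
        return {**theta, x: y}
    if is_var(y):
        return {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None

# Unify Likes(John, x) with Likes(John, Peanuts):
print(unify(("Likes", "John", "x"), ("Likes", "John", "Peanuts")))
# {'x': 'Peanuts'}
```

The substitution {Peanuts/x} is exactly what the first step of the resolution graph applies when clause a is resolved with the negated goal.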
Exercise Problems
Q.2) Consider the following facts:
1. It is humid.
2. If it is humid, then it is hot.
3. If it is hot and humid, then it will rain.
Prove that “It will rain.”

Q.3) Consider the following facts:
1. If the maid stole the jewelry, then the butler was not guilty.
2. Either the maid stole the jewelry or she milked the cow.
3. If the maid milked the cow, then the butler got the cream.
4. Therefore, if the butler was guilty, then he got the cream.
Prove that the conclusion (statement 4) is valid using resolution.
Inference in FOL
• Inference means deriving new sentences from old ones. An inference system
allows us to add a new sentence to the knowledge base. A sentence is a
proposition about the world. The inference system applies logical rules to
the KB to deduce new information.

• An inference system generates new facts so that an agent can update the KB.
An inference system mainly works in one of two ways:

• Forward chaining
• Backward chaining
Forward chaining, backward Chaining
• Forward Chaining
• Forward chaining is a data-driven method of deriving a particular goal from
a given knowledge base and set of inference rules.
• It is a bottom-up approach, as it moves from the facts up to the goal.
• It is the process of drawing a conclusion from known facts or data, starting
from the initial state and working towards the goal state.
• The application of inference rules results in new knowledge, which is then
added to the knowledge base.
• Inference rules are applied successively to elements of the knowledge base
until the goal is reached.
• Example:
• Knowledge Base:
– If [X croaks and eats flies] Then [X is a frog]
– If [X chirps and sings] Then [X is a canary]
– If [X is a frog] Then [X is colored green]
– If [X is a canary] Then [X is colored yellow]
– [Fritz croaks and eats flies]
• Goal:
– [Fritz is colored Y]?
• Solution:
Step 1: Apply an inference rule between the knowledge base and the known facts.
• Step 2: The new fact “Fritz is a frog” is added to the knowledge base.
• Step 3: Apply the inference rule again, between “If [X is a frog] Then [X
is colored green]” and “[Fritz is a frog]”.
• Step 4: The new fact “Fritz is colored green” is added to the knowledge
base. Every derived fact must be compared with the goal.
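The Fritz derivation can be sketched as a small forward-chaining loop. This is a propositional sketch; the string encoding of rules and facts is an assumption for illustration, not a particular library's API.

```python
# Forward chaining on the Fritz example (propositional sketch).
# Each rule is (set of premises, conclusion); facts is the growing KB.

rules = [
    ({"croaks", "eats flies"}, "frog"),
    ({"chirps", "sings"}, "canary"),
    ({"frog"}, "green"),
    ({"canary"}, "yellow"),
]
facts = {"croaks", "eats flies"}   # what we know about Fritz

changed = True
while changed:                      # keep firing rules until nothing new
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # derived fact goes into the KB
            changed = True

print("green" in facts)   # True: Fritz is colored green
```

Note that the loop derives "frog" first and then "green", mirroring Steps 1–4 above; "yellow" is never derived because Fritz neither chirps nor sings.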
• Backward Chaining
• Backward chaining is a goal-driven method of deriving a particular goal
from a given knowledge base and set of inference rules.
• It is known as a top-down approach.
• In backward chaining, the goal is broken down into sub-goals whose truth
must be established.
• It is called a goal-driven approach because the list of goals decides which
rules are selected and used.
• When a rule whose consequent matches a goal is found, the antecedent of
that rule is added to the list of goals (and not to the knowledge base, as
is done in forward chaining).
• The search proceeds in this manner until a goal can be matched against a
fact in the knowledge base.
• Example: Step 1: Apply an inference rule between the knowledge base and the goal.
• Step 2: Unlike forward chaining, the new sub-goal is added to the goal list.
• Step 3: Final step.
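The same Fritz example can be run goal-first. The sketch below recurses from the goal through rule antecedents down to known facts; as above, the encoding is an illustrative assumption.

```python
# Backward chaining on the Fritz example (propositional sketch).

rules = [
    ({"croaks", "eats flies"}, "frog"),
    ({"chirps", "sings"}, "canary"),
    ({"frog"}, "green"),
    ({"canary"}, "yellow"),
]
facts = {"croaks", "eats flies"}

def prove(goal):
    """Try to prove goal: either it is a known fact, or some rule
    concludes it and all of that rule's premises can be proved."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(prove(p) for p in premises):
            return True
    return False

print(prove("green"))    # True
print(prove("yellow"))   # False: Fritz neither chirps nor sings
```

Only rules whose conclusion matches the current goal are ever examined, which is exactly the "list of goals decides which rules are selected" behaviour described above.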
Example
"As per the law, it is a crime for an American to sell weapons to hostile
nations. Country A, an enemy of America, has some missiles, and all the
missiles were sold to it by Robert, who is an American citizen."
Prove that "Robert is criminal."
Conversion of Facts into FOL
a. It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are variables)
American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p) ...(a)
b. Country A has some missiles.
∃p Owns(A, p) ∧ Missile(p)
This can be written as two definite clauses by using Existential Instantiation, introducing a new constant T1.
Owns(A, T1) ......(b1)
Missile(T1) .......(b2)
c. All of the missiles were sold to country A by Robert.
∀ p (Missiles(p) ∧ Owns (A, p)) → Sells (Robert, p, A) ......(c)
d. Missiles are weapons.
Missile(p) → Weapons (p) .......(d)
e. Enemy of America is known as hostile.
Enemy(p, America) →Hostile(p) ........(e)
f. Country A is an enemy of America.
Enemy (A, America) .........(f)
g. Robert is American
American(Robert). ..........(g)
Forward Chaining
a. American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p)
b. Owns(A, T1)
c. Missile(T1)
d. ∀ p (Missiles(p) ∧ Owns (A, p)) → Sells (Robert, p, A)
e. Missile(p) → Weapons (p)
f. Enemy(p, America) → Hostile(p)
g. Enemy (A, America)
h. American(Robert).

(The forward-chaining diagram applies the substitutions {Robert / p},
{T1 / q}, {A / r}, {A / p}, and {T1 / p} to derive Criminal(Robert).)
Backward Chaining
a. American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p)
b. Owns(A, T1)
c. Missile(T1)
d. ∀ p (Missiles(p) ∧ Owns (A, p)) → Sells (Robert, p, A)
e. Missile(p) → Weapons (p)
f. Enemy(p, America) → Hostile(p)
g. Enemy (A, America)
h. American(Robert).

(The backward-chaining diagram starts from the goal Criminal(Robert) and
applies the substitutions {Robert / p}, {T1 / q}, and {A / r} to reach the
known facts.)
Forward Chaining vs. Backward Chaining
1. Forward chaining starts from known facts and applies inference rules to
derive more data until it reaches the goal. Backward chaining starts from
the goal and works backward through inference rules to find the facts that
support the goal.
2. Forward chaining is a bottom-up approach; backward chaining is a top-down
approach.
3. Forward chaining is known as a data-driven inference technique, as we
reach the goal using the available data. Backward chaining is known as a
goal-driven technique, as we start from the goal and divide it into
sub-goals to extract the facts.
4. Forward chaining reasoning applies a breadth-first search strategy;
backward chaining reasoning applies a depth-first search strategy.
5. Forward chaining is suitable for planning, monitoring, control, and
interpretation applications. Backward chaining is suitable for diagnostic,
prescription, and debugging applications.
6. Forward chaining operates in the forward direction; backward chaining
operates in the backward direction.
Example
• The law says that it is a crime for an American to sell weapons to
hostile nations. The country Nono, an enemy of America, has some
missiles, and all of its missiles were sold to it by Colonel West, who is
American.
• Prove that Col. West is a criminal
Solution:
• It is a crime for an American to sell weapons to hostile nations:
American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x)
• Nono … has some missiles,
i.e., ∃x Owns(Nono,x) ∧ Missile(x):
Owns(Nono,M1)
Missile(M1)
• All of its missiles were sold to it by Colonel West
Missile(x) ∧ Owns(Nono,x) ⇒ Sells(West,x,Nono)
• Missiles are weapons:
Missile(x) ⇒ Weapon(x)
• An enemy of America counts as "hostile“
Enemy(x,America) ⇒ Hostile(x)
• West, who is American …
American(West)
• The country Nono, an enemy of America …
Enemy(Nono,America)
Forward Chaining
Backward Chaining
Proof By Resolution:
Negation of Quantifiers
• ~(All birds fly) = Some birds do not fly.
~∀x Bird(x) -> fly(x) = ∃x Bird(x) ˄ ~fly(x)
• ~(Some birds are bigger than elephants) = No birds are bigger than
elephants.
~∃x Bird(x) ˄ Bigger(x, elephant) = ∀x Bird(x) -> ~Bigger(x, elephant)
Conversion to CNF
Uncertain Knowledge and Reasoning:
Uncertainty
• Uncertainty
Often in a complex world, the agent’s theory and the events in the environment
contradict each other, and this results in a reduction of the performance
measure. E.g., suppose the agent’s job is to drop a passenger off on time,
before the flight departs. The agent knows the problems it can face during
the journey: traffic, a flat tire, or an accident. In these cases the agent
cannot guarantee its full performance. This is called uncertainty.
• Probability:
• Objective probability
- Averages over repeated experiments of random events
  E.g., estimate P(Rain) from historical observation
- Makes assertions about future experiments
- New evidence changes the reference class
• Subjective / Bayesian probability
- Degrees of belief about an unobserved event: a state of knowledge
The semantics of belief network
• Probability Basics
• Prior probability
The prior probability of an event is the probability of the event computed before the collection of new data.
One begins with a prior probability of an event and revises it in the light of new data. For example, if 0.01 of
a population has schizophrenia, then the probability that a person drawn at random has
schizophrenia is 0.01. This is the prior probability. If you then learn that the person’s score on a personality
test suggests the person is schizophrenic, you would adjust your probability accordingly. The adjusted
probability is the posterior probability.
• Bayes' Theorem:
Bayes' theorem considers both the prior probability of an event and the diagnostic value of a test to
determine the posterior probability of the event. The theorem is:

P(D|T) = P(T|D) P(D) / [P(T|D) P(D) + P(T|D') P(D')]

where P(D|T) is the posterior probability of Diagnosis D given Test result T, P(T|D) is the conditional
probability of T given D, P(D) is the prior probability of D, P(T|D') is the conditional probability of T given
not D, and P(D') is the probability of not D.
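As a worked example, we can plug in the 0.01 prior from the schizophrenia case. The test characteristics P(T|D) = 0.9 and P(T|D') = 0.05 below are assumed numbers for illustration only; they are not given in the text.

```python
# Bayes' theorem on the schizophrenia example.
# Prior 0.01 is from the text; the test numbers are illustrative assumptions.

def posterior(prior, p_t_given_d, p_t_given_not_d):
    """P(D|T) = P(T|D)P(D) / [P(T|D)P(D) + P(T|D')P(D')]"""
    numerator = p_t_given_d * prior
    return numerator / (numerator + p_t_given_not_d * (1 - prior))

p = posterior(0.01, 0.9, 0.05)
print(round(p, 3))   # 0.154
```

Even with a fairly accurate test, the posterior is only about 15%, because the prior is so small. This is the sense in which Bayes' theorem "considers both the prior probability and the diagnostic value of the test."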
Simple Inference in belief network
• Example 1
• Suppose that there are two events which could cause grass to be wet: either the sprinkler is on or it's
raining. Also, suppose that the rain has a direct effect on the use of the sprinkler (namely that when it
rains, the sprinkler is usually not turned on). Then the situation can be modeled with a Bayesian
network (shown). All three variables have two possible values, T (for true) and F (for false).
• The joint probability function is:
• P(G,S,R) = P(G | S,R)P(S | R)P(R)
• where the names of the variables have been abbreviated to G = Grass wet, S = Sprinkler, and R = Rain.
• The model can answer questions like "What is the probability that it is raining, given the grass is wet?"
by using the conditional probability formula and summing over all nuisance variables:
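This query can be sketched by enumeration: compute the joint P(G, S, R) from the factorization above and sum out the nuisance variable S. The CPT numbers below are assumed (the slide's figure is not reproduced here); they follow the common textbook version of this example.

```python
# P(Rain | Grass wet) in the sprinkler network, by enumeration.
# CPT values are assumptions matching the usual version of this example.
from itertools import product

P_R = {True: 0.2, False: 0.8}                       # P(Rain)
P_S = {True: {True: 0.01, False: 0.99},             # P(Sprinkler | Rain=T)
       False: {True: 0.4, False: 0.6}}              # P(Sprinkler | Rain=F)
P_G = {(True, True): 0.99, (True, False): 0.9,      # P(GrassWet=T | S, R)
       (False, True): 0.8, (False, False): 0.0}

def joint(g, s, r):
    """P(G, S, R) = P(G | S, R) P(S | R) P(R)"""
    pg = P_G[(s, r)] if g else 1 - P_G[(s, r)]
    return pg * P_S[r][s] * P_R[r]

# P(R=T | G=T): sum over the nuisance variable S in numerator,
# and over both S and R in the normalizing denominator.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(True, s, r) for s, r in product((True, False), repeat=2))
print(round(num / den, 4))   # 0.3577
```

With these assumed CPTs, observing wet grass raises the probability of rain from the prior 0.2 to about 0.36; the sprinkler explains away part of the evidence.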
• Example 2:
• Assume your house has an alarm system against burglary. You live in the seismically active area and the
alarm system can get occasionally set off by an earthquake. You have two neighbors, Mary and John,
who do not know each other. If they hear the alarm they call you, but this is not guaranteed.
• We want to represent the probability distribution of events: – Burglary, Earthquake, Alarm, Mary calls
and John calls
1. Directed acyclic graph
• Nodes = random variables Burglary, Earthquake, Alarm, Mary calls and John calls
• Links = direct (causal) dependencies between variables. The chance of Alarm is influenced by Earthquake, The
chance of John calling is affected by the Alarm
2. Local conditional distributions
• relate variables and their parents
3. In the BBN, the full joint distribution is expressed as a product of the
local conditional distributions:
P(B, E, A, J, M) = P(B) P(E) P(A | B, E) P(J | A) P(M | A)
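The full joint distribution of this network factors as P(B) P(E) P(A|B,E) P(J|A) P(M|A). A sketch of that factorization in code follows; the CPT numbers are assumptions (they match the widely used Russell & Norvig version of this network, not necessarily the slide's figure).

```python
# Full joint of the burglary network via local conditional distributions.
# CPT numbers are assumed (standard Russell & Norvig values).

P_B, P_E = 0.001, 0.002                             # P(Burglary), P(Earthquake)
P_A = {(True, True): 0.95, (True, False): 0.94,     # P(Alarm=T | B, E)
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}                     # P(JohnCalls=T | Alarm)
P_M = {True: 0.70, False: 0.01}                     # P(MaryCalls=T | Alarm)

def prob(p_true, value):
    """Probability of a boolean variable taking `value`, given P(var=T)."""
    return p_true if value else 1 - p_true

def full_joint(b, e, a, j, m):
    """P(B,E,A,J,M) = P(B) P(E) P(A|B,E) P(J|A) P(M|A)"""
    return (prob(P_B, b) * prob(P_E, e) * prob(P_A[(b, e)], a)
            * prob(P_J[a], j) * prob(P_M[a], m))

# P(john calls, mary calls, alarm, no burglary, no earthquake):
print(round(full_joint(False, False, True, True, True), 6))   # 0.000628
```

The point of the factorization is compactness: five local tables with 10 independent numbers stand in for a full joint table of 2^5 = 32 entries.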
• Resolution example
• https://www.youtube.com/watch?v=7g6cB3kIHJI
• https://www.youtube.com/watch?v=nEEyPdYxBFY