Unit-3 Notes Part-1
Unit-3 Notes Part-1

This document covers First-Order Logic (FOL), detailing its syntax, semantics, and applications in knowledge engineering. It contrasts FOL with Propositional Logic, emphasizing FOL's greater expressive power and its use of quantifiers and functions. The document also explains inference mechanisms such as forward chaining and backward chaining, alongside the unification algorithm for matching predicates.
UNIT III: Knowledge and Reasoning

First-Order Logic - Syntax and Semantics of First-Order Logic, Using First Order Logic,
Knowledge Engineering in First-Order Logic.
Inference in First-Order Logic - Propositional vs. First-Order Inference, Unification and
Lifting, Forward Chaining, Backward Chaining, Resolution.
Knowledge Representation: Ontological Engineering, Categories and Objects, Events,
Reasoning Systems for Categories, Reasoning with Default Information
Why First-Order Logic (FOL)?
• Propositional Logic (PL) is declarative, compositional, and context-independent.
• But PL has limited expressive power:
o Cannot represent objects, relations, or general rules compactly.
o E.g., "pits cause breezes in adjacent squares" requires many sentences in PL.
FOL extends PL: It assumes the world contains:
• Objects (people, houses, numbers, …)
• Relations (brother of, bigger than, red, …)
• Functions (father of, best friend, …)

Syntax of FOL

Basic Elements

• Constants → KingJohn, 2, NUS
• Predicates → Brother(x, y), >(x, y)
• Functions → Sqrt(x), LeftLegOf(y)
• Variables → x, y, a, b
• Connectives → ¬, ∧, ∨, ⇒, ⇔
• Equality → (=)
• Quantifiers → ∀ (for all), ∃ (there exists)

Sentences in FOL

• Atomic Sentences
o Predicate(terms) or equality.
o Example: Brother(KingJohn, RichardTheLionheart).
• Complex Sentences
o Built from atomic sentences using connectives.
o Examples:
▪ Sibling(KingJohn, Richard) ⇒ Sibling(Richard, KingJohn)
▪ >(1,2) ∨ ≤(1,2)

Semantics: Truth in FOL


• Sentences are true w.r.t. a model and an interpretation.
• Model → set of objects and relations.
• Interpretation → mapping of:
o constants → objects
o predicates → relations
o functions → functional relations

An atomic sentence is true if the referred objects are in the relation defined by the predicate.
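This truth condition can be made concrete with a small sketch. The domain, constant names, and relation below are illustrative choices, not part of any particular KB:

```python
# A tiny model: a set of domain objects, plus an interpretation that maps
# constant symbols to objects and predicate symbols to relations (sets of
# tuples of objects). All names here are illustrative.

objects = {"john", "richard"}

interpretation = {
    "constants": {"KingJohn": "john", "RichardTheLionheart": "richard"},
    "predicates": {"Brother": {("john", "richard"), ("richard", "john")}},
}

def holds(predicate, *constants):
    """An atomic sentence is true iff the objects the constants refer to
    stand in the relation the predicate denotes."""
    args = tuple(interpretation["constants"][c] for c in constants)
    return args in interpretation["predicates"][predicate]

print(holds("Brother", "KingJohn", "RichardTheLionheart"))  # True
```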

Quantifiers

Universal Quantifier (∀)

• ∀x P(x) → P is true for all objects in the model.
• Example: ∀x At(x,NUS) ⇒ Smart(x).
• Common mistake:
o ∀x (At(x,NUS) ∧ Smart(x)) → "Everyone is at NUS AND everyone is smart."

Existential Quantifier (∃)

• ∃x P(x) → P is true for some object in the model.
• Example: ∃x (At(x,NUS) ∧ Smart(x)).
• Equivalent to a disjunction of instantiations.
• Common mistake:
o ∃x (At(x,NUS) ⇒ Smart(x)) → true if anyone is NOT at NUS.

Properties

• ∀x ∀y is the same as ∀y ∀x
• ∃x ∃y is the same as ∃y ∃x
• ∃x ∀y is not the same as ∀y ∃x
• ∃x ∀y Loves(x,y)
– "There is a person who loves everyone in the world"
• ∀y ∃x Loves(x,y)
– "Everyone in the world is loved by at least one person"
• Quantifier duality: each can be expressed using the other
• ∀x Likes(x,IceCream) ≡ ¬∃x ¬Likes(x,IceCream)
• ∃x Likes(x,Broccoli) ≡ ¬∀x ¬Likes(x,Broccoli)
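The duality equivalences above can be checked by brute force over a finite domain. The domain and predicates below are toy examples, not from the text:

```python
# Checking quantifier duality over a finite domain:
#   forall x P(x)  is equivalent to  not exists x not P(x).
# The domain and predicates are illustrative.

domain = [1, 2, 3, 4]

def likes_ice_cream(x):
    return True          # everyone likes ice cream in this toy model

def likes_broccoli(x):
    return x % 2 == 0    # only some objects satisfy this predicate

# ∀x Likes(x,IceCream)  ≡  ¬∃x ¬Likes(x,IceCream)
forall = all(likes_ice_cream(x) for x in domain)
not_exists_not = not any(not likes_ice_cream(x) for x in domain)
assert forall == not_exists_not

# ∃x Likes(x,Broccoli)  ≡  ¬∀x ¬Likes(x,Broccoli)
exists = any(likes_broccoli(x) for x in domain)
not_forall_not = not all(not likes_broccoli(x) for x in domain)
assert exists == not_forall_not
```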

Using FOL

Kinship Example

• Brothers are siblings: ∀x,y Brother(x,y) ⇒ Sibling(x,y).
• Mother: ∀m,c Mother(c) = m ⇔ (Female(m) ∧ Parent(m,c)).
• Sibling is symmetric: ∀x,y Sibling(x,y) ⇔ Sibling(y,x).

Set Theory Example


• The only sets are the empty set and those made by adjoining something to a set:

∀s Set(s) ⇔ (s = {}) ∨ (∃x,s2 Set(s2) ∧ s = {x|s2})

• The empty set has no elements adjoined into it. In other words, there is no way to
decompose {} into a smaller set and an element:

¬∃x,s {x|s} = {}

• Adjoining an element already in the set has no effect:

∀x,s x ∈ s ⇔ s = {x|s}

• The only members of a set are the elements that were adjoined into it. We express this
recursively, saying that x is a member of s if and only if s is equal to some set s2
adjoined with some element y, where either y is the same as x or x is a member of s2:

∀x,s x ∈ s ⇔ [∃y,s2 (s = {y|s2} ∧ (x = y ∨ x ∈ s2))]

• A set is a subset of another set if and only if all of the first set's members are
members of the second set:

∀s1,s2 s1 ⊆ s2 ⇔ (∀x x ∈ s1 ⇒ x ∈ s2)

• Two sets are equal if and only if each is a subset of the other:

∀s1,s2 (s1 = s2) ⇔ (s1 ⊆ s2 ∧ s2 ⊆ s1)

• An object is in the intersection of two sets if and only if it is a member of both sets:

∀x,s1,s2 x ∈ (s1 ∩ s2) ⇔ (x ∈ s1 ∧ x ∈ s2)

• An object is in the union of two sets if and only if it is a member of either set:

∀x,s1,s2 x ∈ (s1 ∪ s2) ⇔ (x ∈ s1 ∨ x ∈ s2)
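The adjoin construction {x|s} and the recursive membership axiom can be sketched directly. The representation below (nested tuples) is an illustrative choice:

```python
# A sketch of the adjoin-based set construction: a set is either the empty
# set () or ("adjoin", x, s). Membership follows the recursive axiom:
# x is in s iff s = {y|s2} with x = y or x a member of s2.

EMPTY = ()

def adjoin(x, s):
    return ("adjoin", x, s)          # builds {x|s}

def member(x, s):
    if s == EMPTY:
        return False                 # {} cannot be decomposed
    _, y, s2 = s
    return x == y or member(x, s2)

def subset(s1, s2, universe):
    # s1 is a subset of s2 iff every member of s1 is a member of s2
    return all(not member(x, s1) or member(x, s2) for x in universe)

s = adjoin(1, adjoin(2, EMPTY))      # {1|{2|{}}}
print(member(1, s), member(3, s))    # True False
```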

Knowledge Engineering in FOL


Steps:
1. Identify the task.
2. Assemble relevant knowledge.
3. Decide vocabulary (predicates, functions, constants).
4. Encode general domain knowledge.
5. Encode specific problem instance.
6. Pose queries.
7. Debug KB.

Example: Refer to the PPT


Propositional vs. First-Order Inference

• Propositional logic is declarative


• Propositional logic allows partial/disjunctive/negated information
– (unlike most data structures and databases)
• Propositional logic is compositional:
– meaning of B1,1 ∧ P1,2 is derived from meaning of B1,1 and of P1,2
• Meaning in propositional logic is context-independent
– (unlike natural language, where meaning depends on context)
• Propositional logic has very limited expressive power
– (unlike natural language)
– E.g., cannot say "pits cause breezes in adjacent squares"
• except by writing one sentence for each square

Aspect                   | Propositional Inference                   | First-Order Inference
Knowledge Representation | Only facts (true/false)                   | Objects, relations, functions, quantifiers
Variables                | Not allowed                               | Allowed (x, y, …)
Quantifiers              | Not supported                             | Supports ∀ (universal), ∃ (existential)
Inference Mechanism      | Truth tables, resolution on atomic facts  | Unification + resolution, substitution rules
Expressiveness           | Low – must write separate facts           | High – general rules apply to many objects
Example                  | "Pits cause breezes" needs one fact per square | ∀x (Pit(x) → Breezy(Adjacent(x)))
Efficiency               | Simpler but less powerful                 | More powerful but computationally harder

Relationship Between Them

• Propositional Logic is a subset of First-Order Logic.
• FOL can express everything PL can, and much more.
• Inference in PL is decidable (truth can always be determined).
• Inference in FOL is semi-decidable (a proof search may not terminate).

The Unification Algorithm

In predicate logic, this matching process is more complicated since the arguments of the
predicates must be considered.

For example, man(John) and ¬man(John) is a contradiction, while man(John) and
¬man(Spot) is not.

Thus, in order to determine contradictions, we need a matching procedure that compares


two literals and discovers whether there exists a set of substitutions that makes them
identical. The straightforward recursive procedure, called the “unification algorithm” does
this.
Basic idea of unification: It is very simple. To attempt to unify two literals, we first check if
their initial predicate symbols are the same. If so, we can proceed. Otherwise, there is no way
they can be unified, regardless of their arguments.

For example, the two literals trytoassassinate(Marcus, Caesar) and hate(Marcus, Caesar)
cannot be unified.

If the predicate symbols match, then we must check the arguments one pair at a time. If
the first matches, we can continue with the second, and so on. To test each argument pair, we
can simply call the unification procedure recursively.

The matching rules are simple: Different constants or predicates cannot match; identical
ones can. A variable can match another variable, any constant, or a predicate expression with
the restriction that the predicate expression must not contain any instances of the variable
being matched.

We must find a single, consistent substitution for the entire literal, not separate ones for
each piece of it. To do this, we must take each substitution that we find and apply it to the
remainder of the literals before we continue trying to unify them.

For example, suppose we want to unify the expressions P(x, x) and P(y, z).

The two instances of P match. Next we compare x and y, and decide that if we substitute
y for x, they could match. We will write that substitution as y/x.

But now, if we simply continue and match x and z, we produce the substitution z/x. But
we cannot substitute both y and z for x. The problem can be solved as follows:

What we need to do after finding the first substitution y/x is to make that
substitution throughout the literals, giving P(y, y) and P(y, z).

Now we can attempt to unify arguments y and z, which succeeds with the
substitution z/y. The entire unification process has now succeeded with a substitution
that is the composition of the two substitutions: (z/y)(y/x).

In general, substitutions (a1/a2, a3/a4, …)(b1/b2, b3/b4, …) mean to apply all the
substitutions of the rightmost list first, then take the result and apply all the ones of the
next list, and so forth, until all substitutions have been applied.

The objective of the unification procedure is to discover at least one substitution
that causes two literals to match. For example, the literals:

hate (x, y)

hate (Marcus, z)

could be unified with any of the following substitutions:

(Marcus/x, z/y)
(Marcus/x, y/z)

(Marcus/x, Caesar/y, Caesar/z)

(Marcus/x, Polonius/y, Polonius/z)

• We describe a procedure Unify(L1, L2), which returns as its value a list
representing the composition of the substitutions that were performed during the
match. The empty list, NIL, indicates that a match was found without any
substitutions. The list consisting of the single value FAIL indicates that the
unification procedure failed.

Algorithm: Unify (L1, L2)

1. If L1 and L2 are both variables or constants, then:

a. If L1 and L2 are identical, then return NIL.

b. Else if L1 is a variable, then if L1 occurs in L2 then return {FAIL}, else return
(L2/L1).

c. Else if L2 is a variable, then if L2 occurs in L1 then return {FAIL}, else return
(L1/L2).

d. Else return {FAIL}.

2. If the initial predicate symbols in L1 and L2 are not identical, then return {FAIL}.

3. If L1 and L2 have a different number of arguments, then return {FAIL}.

4. Set SUBST to NIL. (At the end of this procedure, SUBST will contain all the substitutions
used to unify L1 and L2.)

5. For i ← 1 to the number of arguments in L1:

a. Call Unify with the ith argument of L1 and the ith argument of L2, putting the result in S.

b. If S contains FAIL, then return {FAIL}.

c. If S is not equal to NIL, then:

i. Apply S to the remainder of both L1 and L2.

ii. SUBST ← APPEND(S, SUBST).

6. Return SUBST.
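The procedure above can be sketched in Python. The representation choices here — tuples for literals, lowercase strings for variables, capitalized strings for constants — are assumptions for illustration:

```python
# A minimal sketch of Unify(L1, L2). Literals are tuples ("Pred", arg, ...);
# lowercase strings are variables, capitalized strings are constants.
# Returns a list of (value, variable) substitution pairs (NIL = []), or FAIL.

FAIL = object()   # distinct from NIL, which is the empty list []

def is_variable(t):
    return isinstance(t, str) and t[:1].islower()

def occurs(var, term):
    if term == var:
        return True
    return isinstance(term, tuple) and any(occurs(var, t) for t in term)

def substitute(term, value, var):
    if isinstance(term, tuple):
        return tuple(substitute(t, value, var) for t in term)
    return value if term == var else term

def unify(l1, l2):
    # Step 1: at least one side is a variable or constant
    if not isinstance(l1, tuple) or not isinstance(l2, tuple):
        if l1 == l2:
            return []                                      # 1a: identical
        if is_variable(l1):
            return FAIL if occurs(l1, l2) else [(l2, l1)]  # 1b
        if is_variable(l2):
            return FAIL if occurs(l2, l1) else [(l1, l2)]  # 1c
        return FAIL                                        # 1d
    # Steps 2-3: predicate symbols and argument counts must agree
    if l1[0] != l2[0] or len(l1) != len(l2):
        return FAIL
    # Steps 4-5: unify arguments left to right, threading substitutions
    subst = []
    for i in range(1, len(l1)):
        s = unify(l1[i], l2[i])
        if s is FAIL:
            return FAIL
        for value, var in s:             # apply S to the remainder
            l1 = substitute(l1, value, var)
            l2 = substitute(l2, value, var)
        subst = s + subst                # SUBST <- APPEND(S, SUBST)
    return subst

print(unify(("P", "x", "x"), ("P", "y", "z")))  # [('z', 'y'), ('y', 'x')]
```

On the P(x, x) / P(y, z) example from the text, this yields exactly the composition (z/y)(y/x).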
Forward Chaining

• Also known as forward deduction or forward reasoning method when using an inference
engine.

• A form of reasoning which starts with atomic sentences in the knowledge base and applies
inference rules (Modus Ponens) in the forward direction to extract more data until a goal is
reached.

• The algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds
their conclusions to the known facts. This process repeats until the problem is solved.

Properties of Forward-Chaining:

• It is a bottom-up approach, as it moves from bottom to top.

• It is a process of making a conclusion based on known facts or data, starting from the
initial state and reaching the goal state.

• The forward-chaining approach is also called data-driven, as we reach the goal using the
available data.

• The forward-chaining approach is commonly used in expert systems (such as CLIPS), business
rule systems, and production rule systems.

What is Forward Chaining?

o A data-driven inference method.


o Starts with known facts in the Knowledge Base (KB).
o Applies inference rules to derive new facts until the goal (query) is found or no new
facts can be inferred.
o Common in production systems, rule-based expert systems, and logic programming.

Algorithm

Procedure - FORWARD-CHAINING(KB, Query):

repeat

for each rule (P1 ∧ P2 ∧ … ∧ Pn → Q) in KB do

if all Pi are known facts and Q is not yet known then

add Q to KB

if Query is in KB then

return SUCCESS

until no new facts are added

return FAILURE
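For propositional Horn rules, the procedure above can be sketched in a few lines. The rule set and fact names below are illustrative:

```python
# A sketch of FORWARD-CHAINING(KB, Query) for propositional Horn rules.
# Rules are (premises, conclusion) pairs; facts is a set of known atoms.

def forward_chaining(facts, rules, query):
    facts = set(facts)
    while True:
        # fire every rule whose premises are all known facts
        new = {q for (premises, q) in rules
               if set(premises) <= facts and q not in facts}
        if query in facts | new:
            return True                  # SUCCESS: query derived
        if not new:
            return False                 # FAILURE: no new facts added
        facts |= new                     # add conclusions and repeat

rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(forward_chaining({"A", "B"}, rules, "D"))  # True
```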
Properties of Forward Chaining

• Strategy: Data-driven (works from facts → conclusions).

• Completeness: Yes (if the KB consists of Horn clauses).

• Soundness: Yes (derives only logically entailed facts).

• Efficiency: Can generate many irrelevant facts if the query is far from the initial facts.

Example Knowledge Base – Refer to the PPT

Backward Chaining

• Also known as backward deduction or the backward reasoning method when using an
inference engine.

• The algorithm is a form of reasoning which starts with the goal and works backward,
chaining through rules to find known facts that support the goal.

Properties of backward chaining:

• It is known as a top-down approach.

• Backward chaining is based on the modus ponens inference rule.

• In backward chaining, the goal is broken into sub-goals to prove that the facts
are true.

• It is called a goal-driven approach, as a list of goals decides which rules are selected
and used.

• The backward-chaining algorithm is used in game theory, automated theorem-proving
tools, inference engines, proof assistants, and various AI applications.

• The backward-chaining method mostly uses a depth-first search strategy for proof.

What is Backward Chaining?

• A goal-driven inference method.
• Starts with the query (goal).
• Works backwards, checking which rules can produce the goal.
• Sub-goals are created and solved until:
o They match known facts (SUCCESS), or
o They fail (backtrack and try another rule).
• This is how logic programming (e.g., Prolog) works.

Backward Chaining Algorithm

Procedure BACKWARD-CHAINING(KB, Goal):

if Goal is in KB as a known fact then

return SUCCESS

for each rule (P1 ∧ P2 ∧ … ∧ Pn → Goal) in KB do

if BACKWARD-CHAINING(KB, Pi) returns SUCCESS for every premise Pi then

return SUCCESS

return FAILURE
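A goal-driven sketch over the same Horn-rule representation (rule and fact names illustrative). A rule whose premises cannot all be proved simply causes backtracking to the next rule for the goal:

```python
# A sketch of backward chaining for propositional Horn rules.
# Rules are (premises, conclusion) pairs; facts is a set of known atoms.
# `seen` guards against infinite regress through circular rules.

def backward_chaining(facts, rules, goal, seen=frozenset()):
    if goal in facts:
        return True                      # goal is a known fact
    if goal in seen:
        return False                     # avoid infinite recursion
    for premises, conclusion in rules:
        if conclusion == goal and all(
            backward_chaining(facts, rules, p, seen | {goal})
            for p in premises
        ):
            return True                  # every sub-goal proved
    return False                         # no rule proves the goal

rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(backward_chaining({"A", "B"}, rules, "D"))  # True
```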

Properties of Backward Chaining

• Strategy: Goal-driven (works from goal → facts).

• Completeness: Yes (for Horn clauses, like Forward Chaining).

• Soundness: Yes.

• Efficiency: Can avoid irrelevant inferences (focuses only on proving the goal).

Example Knowledge Base – Refer to the PPT

Forward vs. backward chaining

Forward Chaining | Backward Chaining
Starts from known facts and applies inference rules to extract more data until it reaches the goal. | Starts from the goal and works backward through inference rules to find the required facts that support the goal.
It is a bottom-up approach. | It is a top-down approach.
Known as a data-driven technique, as we reach the goal using the available data. | Known as a goal-driven technique, as we start from the goal and divide it into sub-goals to extract the facts.
Applies a breadth-first search strategy. | Applies a depth-first search strategy.
Tests all the available rules. | Tests only the few required rules.
Suitable for planning, monitoring, control, and interpretation applications. | Suitable for diagnostic, prescription, and debugging applications.
Can generate an infinite number of possible conclusions. | Generates a finite number of possible conclusions.
Operates in the forward direction. | Operates in the backward direction.
Aimed at any conclusion. | Aimed only at the required data.

Resolution in Predicate Logic

• With unification, we now have an easy way of determining that two literals are
contradictory: one of them can be unified with the negation of the other.
• So, for example, man(x) and ¬man(Spot) are contradictory, since man(x) and
man(Spot) can be unified with the substitution Spot/x.
• In order to use resolution for expressions in predicate logic, we use the unification
algorithm to locate pairs of literals that cancel out.
• For example, suppose we want to resolve the two clauses man(Marcus) and
¬man(x1) ∨ mortal(x1).
o The literal man(Marcus) can be unified with the literal ¬man(x1) with
the substitution Marcus/x1. We can then conclude only that
mortal(Marcus) must be true.
• So the resolvent generated by clauses 1 and 2 must be mortal(Marcus), which we get
by applying the result of the unification process to the resolvent. The resolution process
can then proceed from there to discover whether mortal(Marcus) leads to a
contradiction with other available clauses.
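A single resolution step with unification can be sketched on this very example. The clause representation, and the deliberately simplified unifier (variable/constant arguments only, no occurs check), are illustrative assumptions:

```python
# One resolution step: resolve man(Marcus) with ¬man(x1) ∨ mortal(x1).
# A literal is (negated, predicate, arg1, ...); a clause is a list of
# literals; lowercase argument names are variables.

def unify_args(args1, args2, subst):
    """Unify two argument lists; a simplified unifier for atoms only."""
    for a, b in zip(args1, args2):
        a, b = subst.get(a, a), subst.get(b, b)
        if a == b:
            continue
        if a[:1].islower():
            subst[a] = b                 # bind variable a to b
        elif b[:1].islower():
            subst[b] = a                 # bind variable b to a
        else:
            return None                  # distinct constants cannot match
    return subst

def resolve(clause1, clause2):
    """Return the resolvent of two clauses, or None if none exists."""
    for lit1 in clause1:
        for lit2 in clause2:
            # complementary literals: same predicate, opposite signs
            if lit1[1] == lit2[1] and lit1[0] != lit2[0]:
                subst = unify_args(lit1[2:], lit2[2:], {})
                if subst is None:
                    continue
                # keep the remaining literals, with the substitution applied
                rest = [l for l in clause1 if l != lit1] + \
                       [l for l in clause2 if l != lit2]
                return [(neg, pred) + tuple(subst.get(a, a) for a in args)
                        for (neg, pred, *args) in rest]
    return None

c1 = [(False, "man", "Marcus")]
c2 = [(True, "man", "x1"), (False, "mortal", "x1")]
print(resolve(c1, c2))  # [(False, 'mortal', 'Marcus')]
```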

Algorithm: Resolution in predicate logic

1. Convert all the statements of F to clause form.

2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in 1.

3. Repeat until a contradiction is found, no progress can be made, or a predetermined amount


of effort has been expended.

a. Select two clauses. Call these the parent clauses.

b. Resolve them together. The resolvent will be the disjunction of all the literals of
both parent clauses with appropriate substitutions performed and with the following
exception: If there is one pair of literals T1 and ¬T2 such that one of the parent clauses
contains T1 and the other contains ¬T2, and if T1 and T2 are unifiable, then neither T1
nor T2 should appear in the resolvent. We call T1 and T2 complementary literals.
Use the substitution produced by the unification to create the resolvent. If there is
more than one pair of complementary literals, only one pair should be omitted from
the resolvent.

c. If the resolvent is the empty clause, then a contradiction has been found. If it is not,
then add it to the set of clauses available to the procedure.

• There are systematic strategies for making the choice of clauses to resolve together at
each step so that we will find a contradiction if one exists. This can speed up the
process considerably.
o Only resolve pairs of clauses that contain complementary literals, since only
such resolutions produce new clauses that are harder to satisfy than their
parents. To facilitate this, index clauses by the predicates they contain,
combined with an indication of whether the predicate is negated. Then, given a
particular clause, possible resolvents that contain a complementary occurrence
of one of its predicates can be located directly.
o Eliminate certain clauses as soon as they are generated so that they cannot
participate in later resolutions. Two kinds of clauses should be eliminated:
▪ Tautologies (which can never be unsatisfied), and
▪ Clauses that are subsumed by other clauses. For example, P ∨ Q is
subsumed by P.

o Whenever possible, resolve either with one of the clauses that is part of the
statement we are trying to refute or with a clause generated by a resolution
with such a clause. This is called the "set-of-support strategy".
o Whenever possible, resolve with clauses that have a single literal. Such
resolutions generate new clauses with fewer literals than the larger of their
parent clauses and thus are probably closer to the goal of a resolvent with zero
terms. This method is called the “unit-preference-strategy”.
