Name of the Subject: Artificial Intelligence    Subject Code: CS8691    Regulation: 2017
The crown is on King John’s head, so the “on head” relation contains just one tuple, ⟨the crown, King John⟩. The “brother” and “on head” relations are binary relations—that is,
they relate pairs of objects. Certain kinds of relationships are best considered as functions, in
that a given object must be related to exactly one object in this way. For example, each person
has one left leg, so the model has a unary “left leg” function that includes the following mappings:
⟨Richard the Lionheart⟩ → Richard’s left leg
⟨King John⟩ → John’s left leg
Term
A term is a logical expression that refers to an object. Constant symbols are therefore terms,
but it is not always convenient to have a distinct symbol to name every object. For example,
in English we might use the expression “King John’s left leg” rather than giving a name to
his leg. This is what function symbols are for: instead of using a constant symbol, we use
LeftLeg(John). The formal semantics of terms is straightforward. Consider a term f(t1, ..., tn). The function symbol f refers to some function in the model.
Atomic sentences
An atomic sentence (or atom for short) is formed from a predicate symbol optionally followed by a parenthesized list of terms, such as Brother(Richard, John).
Atomic sentences can have complex terms as arguments. Thus,
Married(Father(Richard), Mother(John)) states that Richard the Lionheart’s father is married to King John’s mother.
Complex Sentences
We can use logical connectives to construct more complex sentences, with the same syntax and semantics as in propositional calculus:
¬Brother(LeftLeg(Richard), John)
Brother(Richard, John) ∧ Brother(John, Richard)
King(Richard) ∨ King(John)
¬King(Richard) ⇒ King(John)
Quantifiers
Quantifiers are used to express properties of entire collections of objects, instead of enumerating the objects by name. First-order logic contains two standard quantifiers, called universal (∀) and existential (∃).
Nested quantifiers
We will often want to express more complex sentences using multiple quantifiers. The
simplest case is where the quantifiers are of the same type. For example, “Brothers are
siblings” can be written as
∀ x ∀ y Brother (x, y) ⇒ Sibling(x, y)
Consecutive quantifiers of the same type can be written as one quantifier with several
variables. For example, to say that siblinghood is a symmetric relationship, we can write
∀ x, y Sibling(x, y) ⇔ Sibling(y, x) .
In other cases we will have mixtures. “Everybody loves somebody” means that for every
person, there is someone that person loves:
∀ x ∃ y Loves(x, y) .
On the other hand, to say “There is someone who is loved by everyone,” we write
∃ y ∀ x Loves(x, y) .
The order of quantification is therefore very important. It becomes clearer if we insert
parentheses.
∀ x (∃ y Loves(x, y)) says that everyone has a particular property, namely, the property
that they love someone. On the other hand, ∃ y (∀ x Loves(x, y)) says that someone in
the world has a particular property, namely the property of being loved by everybody.
Equality
We can use the equality symbol to signify that two terms refer to the same object. For
example,
Father (John)=Henry
says that the object referred to by Father (John) and the object referred to by Henry are the
same.
The equality symbol can be used to state facts about a given function, as we just did for the
Father symbol. It can also be used with negation to insist that two terms are not the same
object. To say that Richard has at least two brothers, we would write
∃ x, y Brother(x, Richard) ∧ Brother(y, Richard) ∧ ¬(x = y)
The wumpus agent receives a percept vector with five elements. The corresponding first-
order sentence stored in the knowledge base must include both the percept and the time at
which it occurred; otherwise, the agent will get confused about when it saw what. We use
integers for time steps. A typical percept sentence would be
Percept ([Stench, Breeze, Glitter , None, None], 5) .
Here, Percept is a binary predicate, and Stench and so on are constants placed in a list. The
actions in the wumpus world can be represented by logical terms:
Turn(Right), Turn(Left), Forward, Shoot, Grab, Climb.
To determine which is best, the agent program executes the query
ASKVARS(∃ a BestAction(a, 5)) ,
which returns a binding list such as {a/Grab}. The agent program can then return Grab as
the action to take. The raw percept data implies certain facts about the current state. For
example:
∀ t, s, g, m, c Percept([s, Breeze, g, m, c], t) ⇒ Breeze(t)
∀ t, s, b, m, c Percept([s, b, Glitter, m, c], t) ⇒ Glitter(t)
These rules exhibit a trivial form of the reasoning process called perception.
Simple “reflex” behavior can also be implemented by quantified implication sentences. For
example, we have
∀ t Glitter (t) ⇒ BestAction(Grab, t) .
Given the percept and rules from the preceding paragraphs, this would yield the desired
conclusion
BestAction(Grab, 5)—that is, Grab is the right thing to do.
For example, if the agent is at a square and perceives a breeze, then that square is breezy:
∀ s, t At(Agent, s, t) ∧ Breeze(t) ⇒ Breezy(s) .
It is useful to know that a square is breezy because we know that the pits cannot move about.
Notice that Breezy has no time argument.
Having discovered which places are breezy (or smelly) and, very important, not breezy (or
not smelly), the agent can deduce where the pits are (and where the wumpus is). First-order logic just needs one axiom:
∀ s Breezy(s) ⇔ ∃ r Adjacent(r, s) ∧ Pit(r)
SUBSTITUTION
Let us begin with universal quantifiers. Suppose our knowledge base contains the axiom stating that all greedy kings are evil:
∀ x King(x) ∧ Greedy(x) ⇒ Evil(x) .
Then it seems quite permissible to infer any of the following sentences:
King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard ) ∧ Greedy(Richard) ⇒ Evil(Richard)
King(Father (John)) ∧ Greedy(Father (John)) ⇒ Evil(Father (John)) .
The rule of Universal Instantiation (UI for short) says that we can infer any sentence
obtained by substituting a ground term (a term without variables) for the variable. Let
SUBST(θ,α) denote the result of applying the substitution θ to the sentence α. Then
the rule is written
∀v α
--------------------
SUBST({v/g}, α)
for any variable v and ground term g. For example, the three sentences given earlier are
obtained with the substitutions {x/John}, {x/Richard }, and {x/Father (John)}.
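As an illustration (not the book’s pseudocode), here is a minimal Python sketch of applying a substitution: sentences and terms are encoded as nested tuples, variables as lowercase strings, and the helper names is_variable and subst are invented for this sketch.

def is_variable(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, alpha):
    # Apply the substitution theta (a dict such as {'x': 'John'}) to alpha.
    if is_variable(alpha):
        return theta.get(alpha, alpha)      # replace bound variables
    if isinstance(alpha, tuple):            # compound term or sentence
        return tuple(subst(theta, a) for a in alpha)
    return alpha                            # constant, predicate, or connective symbol

# Universal Instantiation: from the quantified rule, infer SUBST({x/John}, rule).
rule = ('⇒', ('∧', ('King', 'x'), ('Greedy', 'x')), ('Evil', 'x'))
print(subst({'x': 'John'}, rule))
# ('⇒', ('∧', ('King', 'John'), ('Greedy', 'John')), ('Evil', 'John'))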
In the rule for Existential Instantiation, the variable is replaced by a single new constant
symbol. The formal statement is as follows: for any sentence α, variable v, and constant
symbol k that does not appear elsewhere in the knowledge base,
∃v α
--------------------
SUBST({v/k}, α)
For example, from the sentence
∃ x Crown(x) ∧ OnHead(x, John)
we can infer the sentence
Crown(C1) ∧ OnHead(C1, John)
EXAMPLE
Suppose our knowledge base contains just the sentences
∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
King(John)
Greedy(John)
Brother (Richard, John)
Then we apply UI to the first sentence using all possible ground-term substitutions from the
vocabulary of the knowledge base—in this case, {x/John} and {x/Richard }. We obtain
King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard ) ∧ Greedy(Richard) ⇒ Evil(Richard) ,
and we discard the universally quantified sentence. Now, the knowledge base is essentially
propositional if we view the ground atomic sentences
King(John),
Greedy(John), and so on—as proposition symbols.
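To make the propositionalization step concrete, the following sketch grounds the rule with every constant in the knowledge base’s vocabulary; the representation (nested tuples, lowercase variables) and the function instantiate are illustrative assumptions, not part of the source.

def instantiate(sentence, var, value):
    # Replace every occurrence of variable var by the ground term value.
    if sentence == var:
        return value
    if isinstance(sentence, tuple):
        return tuple(instantiate(a, var, value) for a in sentence)
    return sentence

rule = ('⇒', ('∧', ('King', 'x'), ('Greedy', 'x')), ('Evil', 'x'))
for c in ['John', 'Richard']:               # the constants of this knowledge base
    print(instantiate(rule, 'x', c))
# ('⇒', ('∧', ('King', 'John'), ('Greedy', 'John')), ('Evil', 'John'))
# ('⇒', ('∧', ('King', 'Richard'), ('Greedy', 'Richard')), ('Evil', 'Richard'))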
UNIFICATION
Lifted inference rules require finding substitutions that make different logical expressions
look identical. This process is called unification and is a key component of all first-order
inference algorithms. The UNIFY algorithm takes two sentences and returns a unifier for
them if one exists:
UNIFY(p, q)=θ where SUBST(θ, p)= SUBST(θ, q) .
Suppose we have a query
ASKVARS(Knows(John, x)): whom does John know? Answers can be found by finding all
sentences in the knowledge base that unify with Knows(John, x). Here are the results of
unification with four different sentences that might be in the knowledge base:
UNIFY(Knows(John, x), Knows(John, Jane)) = {x/Jane}
UNIFY(Knows(John, x), Knows(y, Bill )) = {x/Bill, y/John}
UNIFY(Knows(John, x), Knows(y,Mother (y))) = {y/John, x/Mother (John)}
UNIFY(Knows(John, x), Knows(x, Elizabeth)) = fail .
The last unification fails because x cannot take on the values John and Elizabeth at
the same time. Now, remember that Knows(x, Elizabeth) means “Everyone knows
Elizabeth,” so we should be able to infer that John knows Elizabeth. The problem arises only
because the two sentences happen to use the same variable name, x. The problem can be
avoided by standardizing apart one of the two sentences being unified, which means
renaming its variables to avoid name clashes.
For example, we can rename x in Knows(x, Elizabeth) to x17 (a new variable name)
without changing its meaning. Now the unification will work:
UNIFY(Knows(John, x), Knows(x17, Elizabeth)) = {x/Elizabeth, x17/John}
UNIFY should return a substitution that makes the two arguments look the same. But
there could be more than one such unifier. For example, UNIFY(Knows(John, x),
Knows(y, z)) could return {y/John, x/z} or {y/John, x/John, z/John}. The first
unifier gives Knows(John, z) as the result of unification, whereas the second gives
Knows(John, John). The second result could be obtained from the first by an additional
substitution {z/John}; we say that the first unifier is more general than the second, because
it places fewer restrictions on the values of the variables.
An algorithm for computing most general unifiers is shown in Figure 9.1. The process
is simple: recursively explore the two expressions simultaneously “side by side,” building up
a unifier along the way, but failing if two corresponding points in the structures do not match.
There is one expensive step: when matching a variable against a complex term, one must check whether the variable itself occurs inside the term; if it does, the match fails because no consistent unifier can be constructed. This check is called the occur check.
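The following is a rough Python rendering of the idea behind UNIFY with the occur check, under the same tuple encoding assumed in the earlier sketches; it is a sketch of the standard algorithm rather than the code of Figure 9.1, and the identifiers (is_var, occurs, unify, unify_var) are invented here.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def occurs(var, term, theta):
    # Does var occur anywhere inside term (following bindings in theta)?
    if var == term:
        return True
    if is_var(term) and term in theta:
        return occurs(var, theta[term], theta)
    if isinstance(term, tuple):
        return any(occurs(var, a, theta) for a in term)
    return False

def unify(x, y, theta=None):
    # Return a most general unifier of x and y (a dict), or None on failure.
    if theta is None:
        theta = {}
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None                              # mismatched symbols or arities: fail

def unify_var(var, t, theta):
    if var in theta:
        return unify(theta[var], t, theta)
    if is_var(t) and t in theta:
        return unify(var, theta[t], theta)
    if occurs(var, t, theta):                # the "expensive step": the occur check
        return None
    return {**theta, var: t}

print(unify(('Knows', 'John', 'x'), ('Knows', 'y', ('Mother', 'y'))))
# {'y': 'John', 'x': ('Mother', 'y')}  (dereferencing y gives Mother(John))
print(unify(('Knows', 'John', 'x'), ('Knows', 'x', 'Elizabeth')))
# None: x cannot be John and Elizabeth at the same time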
FORWARD CHAINING
The following are first-order definite clauses:
King(x) ∧ Greedy(x) ⇒ Evil(x)
King(John)
Greedy(y)
Unlike propositional literals, first-order literals can include variables, in which case those
variables are assumed to be universally quantified. Consider the following problem:
The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.
We will prove that West is a criminal. First, we will represent these facts as first-order definite clauses. The next section shows how the forward-chaining algorithm solves the problem.
“. . . it is a crime for an American to sell weapons to hostile nations”:
American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)    ... (9.3)
“Nono . . . has some missiles.” The sentence ∃ x Owns(Nono, x) ∧ Missile(x) is transformed into two definite clauses by Existential Instantiation, introducing a new constant M1:
Owns(Nono, M1)    ... (9.4)
Missile(M1)    ... (9.5)
“All of its missiles were sold to it by Colonel West”:
Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono)    ... (9.6)
We also need to know that missiles are weapons:
Missile(x) ⇒ Weapon(x)    ... (9.7)
and that an enemy of America counts as “hostile”:
Enemy(x, America) ⇒ Hostile(x)    ... (9.8)
Finally, West is American and Nono is an enemy of America:
American(West)    ... (9.9)
Enemy(Nono, America)    ... (9.10)
This knowledge base contains no function symbols and is therefore an instance of the class of Datalog knowledge bases. Datalog is a language that is restricted to first-order definite clauses with no function symbols. Datalog gets its name because it can represent the type of statements typically made in relational databases.
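As a rough illustration of forward chaining on this Datalog knowledge base, the sketch below repeatedly fires any rule whose premises can all be matched against known facts, until a fixed point is reached; the data structures and function names (match, satisfy, forward_chain) are assumptions of this sketch rather than the FOL-FC-ASK pseudocode.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, s):
    if is_var(s):
        return theta.get(s, s)
    if isinstance(s, tuple):
        return tuple(subst(theta, a) for a in s)
    return s

def match(pattern, fact, theta):
    # Match a premise pattern against a ground fact, extending theta, or fail with None.
    pattern = subst(theta, pattern)
    if pattern == fact:
        return theta
    if is_var(pattern):
        return {**theta, pattern: fact}
    if isinstance(pattern, tuple) and isinstance(fact, tuple) and len(pattern) == len(fact):
        for p, f in zip(pattern, fact):
            theta = match(p, f, theta)
            if theta is None:
                return None
        return theta
    return None

def satisfy(premises, facts, theta):
    # Generate every substitution that makes all premises known facts.
    if not premises:
        yield theta
        return
    for fact in facts:
        extended = match(premises[0], fact, theta)
        if extended is not None:
            yield from satisfy(premises[1:], facts, extended)

def forward_chain(rules, facts):
    facts = set(facts)
    while True:
        new_facts = set()
        for premises, conclusion in rules:
            for theta in satisfy(premises, facts, {}):
                derived = subst(theta, conclusion)
                if derived not in facts:
                    new_facts.add(derived)
        if not new_facts:                      # fixed point: nothing new on this pass
            return facts
        facts |= new_facts

rules = [
    ([('American', 'x'), ('Weapon', 'y'), ('Sells', 'x', 'y', 'z'), ('Hostile', 'z')],
     ('Criminal', 'x')),                                                          # (9.3)
    ([('Missile', 'x'), ('Owns', 'Nono', 'x')], ('Sells', 'West', 'x', 'Nono')),  # (9.6)
    ([('Missile', 'x')], ('Weapon', 'x')),                                        # (9.7)
    ([('Enemy', 'x', 'America')], ('Hostile', 'x')),                              # (9.8)
]
facts = [('Owns', 'Nono', 'M1'), ('Missile', 'M1'),
         ('American', 'West'), ('Enemy', 'Nono', 'America')]       # (9.4), (9.5), (9.9), (9.10)
print(('Criminal', 'West') in forward_chain(rules, facts))          # True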
BACKWARD CHAINING
The second major family of logical inference algorithms uses the backward chaining approach for definite clauses. These algorithms work backward from the goal, chaining through rules to find known facts that support the proof.
A backward-chaining algorithm
Figure 9.6 shows a backward-chaining algorithm for definite clauses. FOL-BC-
ASK(KB, goal ) will be proved if the knowledge base contains a clause of the form lhs ⇒
goal, where lhs (left-hand side) is a list of conjuncts. An atomic fact like American(West)
is considered as a clause whose lhs is the empty list. Now a query that contains variables
might be proved in multiple ways. For example, the query Person(x) could be proved with
the substitution {x/John} as well as with {x/Richard }. So we implement FOL-BC-ASK
as a generator— a function that returns multiple times, each time giving one possible result.
Backward chaining is a kind of AND/OR search—the OR part because the goal query can
be proved by any rule in the knowledge base, and the AND part because all the conjuncts in
the lhs of a clause must be proved. FOL-BC-OR works by fetching all clauses that might
unify with the goal, standardizing the variables in the clause to be brand-new variables, and
then, if the rhs of the clause does indeed unify with the goal, proving every conjunct in the
lhs, using FOL-BC-AND. That function in turn works by proving each of the conjuncts in
turn, keeping track of the accumulated substitution as we go. Figure 9.7 is the proof tree for
deriving Criminal (West) from sentences (9.3) through (9.10).
Backward chaining, as we have written it, is clearly a depth-first search algorithm. This
means that its space requirements are linear in the size of the proof (neglecting, for now, the
space required to accumulate the solutions). It also means that backward chaining (unlike
forward chaining) suffers from problems with repeated states and incompleteness.
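The generator structure described above can be sketched in Python as follows; this is an illustrative approximation of FOL-BC-ASK (here split into bc_ask for the OR part and bc_and for the AND part), with a simplified unifier that omits the occur check, run on the crime knowledge base. All identifiers are assumptions of the sketch.

import itertools

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, s):
    if is_var(s):
        t = theta.get(s, s)
        return subst(theta, t) if t != s else s    # follow chains of bindings
    if isinstance(s, tuple):
        return tuple(subst(theta, a) for a in s)
    return s

def unify(x, y, theta):
    # Simplified unifier (no occur check); None signals failure.
    x, y = subst(theta, x), subst(theta, y)
    if x == y:
        return theta
    if is_var(x):
        return {**theta, x: y}
    if is_var(y):
        return {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None

counter = itertools.count()

def standardize(clause):
    # Rename the clause's variables to brand-new ones (standardizing apart).
    n = next(counter)
    rename = lambda t: (t + str(n) if is_var(t)
                        else tuple(rename(a) for a in t) if isinstance(t, tuple) else t)
    lhs, rhs = clause
    return [rename(p) for p in lhs], rename(rhs)

def bc_ask(kb, goal, theta=None):                  # the OR part: any clause may prove the goal
    theta = theta or {}
    for clause in kb:
        lhs, rhs = standardize(clause)
        t = unify(rhs, goal, theta)
        if t is not None:
            yield from bc_and(kb, lhs, t)

def bc_and(kb, goals, theta):                      # the AND part: prove every conjunct in turn
    if not goals:
        yield theta
        return
    for t in bc_ask(kb, subst(theta, goals[0]), theta):
        yield from bc_and(kb, goals[1:], t)

kb = [
    ([('American', 'x'), ('Weapon', 'y'), ('Sells', 'x', 'y', 'z'), ('Hostile', 'z')],
     ('Criminal', 'x')),
    ([('Missile', 'm')], ('Weapon', 'm')),
    ([('Missile', 'w'), ('Owns', 'Nono', 'w')], ('Sells', 'West', 'w', 'Nono')),
    ([('Enemy', 'e', 'America')], ('Hostile', 'e')),
    ([], ('Owns', 'Nono', 'M1')), ([], ('Missile', 'M1')),
    ([], ('American', 'West')), ([], ('Enemy', 'Nono', 'America')),
]
for answer in bc_ask(kb, ('Criminal', 'West')):
    print(subst(answer, ('Criminal', 'West')))     # ('Criminal', 'West')
    break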
RESOLUTION
We describe how to extend resolution to first-order logic.
Conjunctive normal form for first-order logic
As in the propositional case, first-order resolution requires that sentences be in conjunctive
normal form (CNF)—that is, a conjunction of clauses, where each clause is a disjunction of
literals. Literals can contain variables, which are assumed to be universally quantified. For example, the sentence
∀ x American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)
becomes, in CNF,
¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x, y, z) ∨ ¬Hostile(z) ∨ Criminal (x)
Every sentence of first-order logic can be converted into an inferentially equivalent CNF
sentence.
We illustrate the procedure by translating the sentence “Everyone who loves all animals is
loved by someone,” or
∀ x [∀ y Animal(y) ⇒ Loves(x, y)] ⇒ [∃ y Loves(y, x)]
The steps are as follows:
• Eliminate implications:
∀ x [¬∀ y ¬Animal(y) ∨ Loves(x, y)] ∨ [∃ y Loves(y, x)]
• Move ¬ inwards: In addition to the usual rules for negated connectives, we need rules
for negated quantifiers. Thus, we have
¬∀x p becomes ∃ x ¬p
¬∃x p becomes ∀ x ¬p .
Our sentence goes through the following transformations:
∀ x [∃ y ¬(¬Animal(y) ∨ Loves(x, y))] ∨ [∃ y Loves(y, x)]
∀ x [∃ y ¬¬Animal(y) ∧ ¬Loves(x, y)] ∨ [∃ y Loves(y, x)]
∀ x [∃ y Animal (y) ∧¬Loves(x, y)] ∨ [∃ y Loves(y, x)]
Notice how a universal quantifier (∀ y) in the premise of the implication has become an
existential quantifier. The sentence now reads “Either there is some animal that x doesn’t
love, or (if this is not the case) someone loves x.” Clearly, the meaning of the original
sentence has been preserved.
• Standardize variables: For sentences like (∃xP(x))∨(∃xQ(x)) which use the same
variable name twice, change the name of one of the variables. This avoids confusion later
when we drop the quantifiers. Thus, we have
∀ x [∃ y Animal (y) ∧¬Loves(x, y)] ∨ [∃ z Loves(z, x)]
• Skolemize: Skolemization is the process of removing existential quantifiers by
elimination.
In the simple case, it is just like the Existential Instantiation rule of Section 9.1: translate ∃
x P(x) into P(A), where A is a new constant. However, we can’t apply Existential Instantiation
to our sentence above because it doesn’t match the pattern ∃v α; only parts of the sentence
match the pattern. If we blindly apply the rule to the two matching parts we get
∀ x [Animal (A) ∧ ¬Loves(x,A)] ∨ Loves(B, x)
which has the wrong meaning entirely: it says that everyone either fails to love a particular
animal A or is loved by some particular entity B. In fact, our original sentence allows each
person to fail to love a different animal or to be loved by a different person. Thus, we want the Skolem entities to depend on x:
∀ x [Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x)
Here F and G are Skolem functions. The general rule is that the arguments of the Skolem
function are all the universally quantified variables in whose scope the existential quantifier
appears. As with Existential Instantiation, the Skolemized sentence is satisfiable exactly
when the original sentence is satisfiable.
• Drop universal quantifiers: At this point, all remaining variables must be universally
quantified. Moreover, the sentence is equivalent to one in which all the universal quantifiers
have been moved to the left. We can therefore drop the universal quantifiers:
[Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x)
• Distribute ∨ over ∧:
[Animal(F(x)) ∨ Loves(G(x), x)] ∧ [¬Loves(x, F(x)) ∨ Loves(G(x), x)]
This step may also require flattening out nested conjunctions and disjunctions.
Propositional literals are complementary if one is the negation of the other; first-order literals are complementary if one unifies with the negation of the other. Thus, we have the first-order resolution rule:
ℓ1 ∨ ··· ∨ ℓk,    m1 ∨ ··· ∨ mn
--------------------------------------------------------------------------------
SUBST(θ, ℓ1 ∨ ··· ∨ ℓi−1 ∨ ℓi+1 ∨ ··· ∨ ℓk ∨ m1 ∨ ··· ∨ mj−1 ∨ mj+1 ∨ ··· ∨ mn)
where UNIFY(ℓi, ¬mj) = θ. This rule is called the binary resolution rule because it resolves exactly two literals.
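Below is a hypothetical sketch of a single binary resolution step, using the negated goal and the Criminal rule from the crime example in clause form: clauses are frozensets of literals, negative literals are wrapped in ('Not', ...), and the helpers negate and resolve are invented for this sketch (the two clauses are assumed to already be standardized apart).

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, s):
    if is_var(s):
        return theta.get(s, s)
    if isinstance(s, tuple):
        return tuple(subst(theta, a) for a in s)
    return s

def unify(x, y, theta):
    # Simplified unifier (no occur check); None signals failure.
    x, y = subst(theta, x), subst(theta, y)
    if x == y:
        return theta
    if is_var(x):
        return {**theta, x: y}
    if is_var(y):
        return {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None

def negate(lit):
    return lit[1] if lit[0] == 'Not' else ('Not', lit)

def resolve(c1, c2):
    # Yield every binary resolvent of the two clauses.
    for l1 in c1:
        for l2 in c2:
            theta = unify(l1, negate(l2), {})
            if theta is not None:
                rest = (c1 - {l1}) | (c2 - {l2})
                yield frozenset(subst(theta, lit) for lit in rest)

# Resolve the negated goal ¬Criminal(West) against the Criminal clause (9.3):
goal = frozenset([('Not', ('Criminal', 'West'))])
rule = frozenset([('Not', ('American', 'x')), ('Not', ('Weapon', 'y')),
                  ('Not', ('Sells', 'x', 'y', 'z')), ('Not', ('Hostile', 'z')),
                  ('Criminal', 'x')])
for resolvent in resolve(goal, rule):
    print(sorted(resolvent))
# [('Not', ('American', 'West')), ('Not', ('Hostile', 'z')),
#  ('Not', ('Sells', 'West', 'y', 'z')), ('Not', ('Weapon', 'y'))]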
EXAMPLE 1
Consider the crime example. The sentences in CNF are
¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x, y, z) ∨ ¬Hostile(z) ∨ Criminal (x)
¬Missile(x) ∨ ¬Owns(Nono, x) ∨ Sells(West, x, Nono)
¬Enemy(x,America) ∨ Hostile(x)
¬Missile(x) ∨Weapon(x)
Owns(Nono, M1)
Missile(M1)
American(West)
Enemy(Nono, America)
We also include the negated goal ¬Criminal (West). The resolution proof is shown in
Figure 9.11.
Notice the structure: a single “spine” begins with the goal clause and resolves against clauses from the knowledge base until the empty clause is generated. This is characteristic of
resolution on Horn clause knowledge bases. In fact, the clauses along the main spine
correspond exactly to the consecutive values of the goals variable in the backward-chaining
algorithm of Figure 9.6. This is because we always choose to resolve with a clause whose positive literal unifies with the leftmost literal of the “current” clause on the spine; this is
exactly what happens in backward chaining. Thus, backward chaining is just a special case
of resolution with a particular control strategy to decide which resolution to perform next.
EXAMPLE 2
Our second example makes use of Skolemization and involves clauses that are not definite
clauses. This results in a somewhat more complex proof structure. In English, the problem
is as follows:
Everyone who loves all animals is loved by someone.
Anyone who kills an animal is loved by no one.
Jack loves all animals.
Either Jack or Curiosity killed the cat, who is named Tuna.
Did Curiosity kill the cat?
The resolution proof that Curiosity killed the cat is given in Figure 9.12. In English, the proof
could be paraphrased as follows:
Suppose Curiosity did not kill Tuna. We know that either Jack or Curiosity did; thus Jack
must have. Now, Tuna is a cat and cats are animals, so Tuna is an animal. Because anyone
who kills an animal is loved by no one, we know that no one loves Jack. On the other hand,
Jack loves all animals, so someone loves him; so we have a contradiction. Therefore,
Curiosity killed the cat.
The proof answers the question “Did Curiosity kill the cat?” but often we want to pose more
general questions, such as “Who killed the cat?” Resolution can do this, but it takes a little
more work to obtain the answer. The goal is ∃w Kills(w, Tuna), which, when negated,
becomes ¬Kills(w, Tuna) in CNF. Repeating the proof in Figure 9.12 with the new negated
goal, we obtain a similar proof tree, but with the substitution {w/Curiosity} in one of the
steps. So, in this case, finding out who killed the cat is just a matter of keeping track of the
bindings for the query variables in the proof.
EXAMPLE 3
1. All people who are graduating are happy.
2. All happy people smile.
3. Someone is graduating.
4. Conclusion: Is someone smiling?
Solution:
Convert the sentences into predicate logic:
1. ∀ x Graduating(x) ⇒ Happy(x)
2. ∀ x Happy(x) ⇒ Smile(x)
3. ∃ x Graduating(x)
4. Conclusion (to prove): ∃ x Smile(x)
Applying Existential Instantiation to (3) gives Graduating(G1) for a new constant G1; chaining through (1) and (2) yields Happy(G1) and then Smile(G1), so ∃ x Smile(x) follows. Yes, someone is smiling. (Equivalently, a resolution refutation of the negated conclusion ¬Smile(x) succeeds.)
ONTOLOGICAL ENGINEERING
The general framework of concepts is called an upper ontology because of the convention of drawing graphs with the general concepts at the top and the more specific concepts below them, as in Figure 12.1.
Categories can also be defined by providing necessary and sufficient conditions for membership. For
example, a bachelor is an unmarried adult male:
x∈ Bachelors ⇔ Unmarried(x) ∧ x∈ Adults ∧ x∈Males
Physical Composition
We use the general PartOf relation to say that one thing is part of another. Objects can be grouped into PartOf hierarchies, reminiscent of the Subset hierarchy:
PartOf (Bucharest , Romania)
PartOf (Romania, EasternEurope)
PartOf (EasternEurope, Europe)
PartOf (Europe, Earth)
The PartOf relation is transitive and reflexive; that is,
PartOf (x, y) ∧ PartOf (y, z) ⇒ PartOf (x, z)
PartOf (x, x)
Therefore, we can conclude PartOf (Bucharest , Earth).
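A small sketch of how the transitivity axiom propagates PartOf facts to a fixed point; the set-based encoding is an assumption of the sketch.

# Base PartOf facts, as in the hierarchy above.
part_of = {('Bucharest', 'Romania'), ('Romania', 'EasternEurope'),
           ('EasternEurope', 'Europe'), ('Europe', 'Earth')}

# Transitivity: add PartOf(x, z) whenever PartOf(x, y) and PartOf(y, z) are known.
changed = True
while changed:
    changed = False
    for (x, y1) in list(part_of):
        for (y2, z) in list(part_of):
            if y1 == y2 and (x, z) not in part_of:
                part_of.add((x, z))
                changed = True

print(('Bucharest', 'Earth') in part_of)   # True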
It is also useful to define composite objects with definite parts but no particular structure; the BunchOf function takes a set of objects and denotes the composite object whose parts are the elements of that set. For example, if the apples are Apple1, Apple2, and Apple3, then
BunchOf ({Apple1,Apple2,Apple3})
denotes the composite object with the three apples as parts (not elements).
We can define BunchOf in terms of the PartOf relation. Obviously, each element of s is part of
BunchOf (s):
∀x x∈ s ⇒ PartOf (x, BunchOf (s))
Furthermore, BunchOf (s) is the smallest object satisfying this condition. In other words, BunchOf
(s) must be part of any object that has all the elements of s as parts:
∀ y [∀x x∈ s ⇒ PartOf (x, y)] ⇒ PartOf (BunchOf (s), y)
Measurements
In both scientific and commonsense theories of the world, objects have height, mass, cost, and so on.
The values that we assign for these properties are called measures. For example, if the line segment L1 has a length of 1.5 inches, then
Length(L1) = Inches(1.5) = Centimeters(3.81)
Conversion between units is done by equating multiples of one unit to another:
Centimeters(2.54 ×d)=Inches(d)
Similar axioms can be written for pounds and kilograms, seconds and days, and dollars and cents.
Measures can be used to describe objects as follows:
Diameter (Basketball12)=Inches(9.5)
ListPrice(Basketball12)=$(19)
d∈ Days ⇒ Duration(d)=Hours(24)
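A minimal sketch of measures as unit-tagged values converted to a common base unit (centimeters), so that Inches(1.5) and Centimeters(3.81) denote the same length; the table CM_PER_UNIT and the function length_in_cm are illustrative names, not part of the source.

CM_PER_UNIT = {'Centimeters': 1.0, 'Inches': 2.54}     # Centimeters(2.54 * d) = Inches(d)

def length_in_cm(unit, d):
    # Express the measure d of the given unit in the base unit (centimeters).
    return CM_PER_UNIT[unit] * d

print(length_in_cm('Inches', 1.5))                     # 3.81 (up to floating-point rounding)
print(abs(length_in_cm('Inches', 1.5) - length_in_cm('Centimeters', 3.81)) < 1e-9)   # True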
Time Intervals
The event calculus opens us up to the possibility of talking about time points and time intervals. We will consider two kinds of time intervals: moments and extended intervals. The distinction is that only moments have zero duration:
Partition({Moments,ExtendedIntervals}, Intervals )
i∈Moments ⇔ Duration(i)=Seconds(0)
The functions Begin and End pick out the earliest and latest moments in an interval, and the function
Time delivers the point on the time scale for a moment. The function Duration gives the difference
between the end time and the start time.
Interval (i) ⇒ Duration(i)=(Time(End(i)) − Time(Begin(i)))
Time(Begin(AD1900))=Seconds(0)
Time(Begin(AD2001))=Seconds(3187324800)
Time(End(AD2001))=Seconds(3218860800)
Duration(AD2001)=Seconds(31536000)
Two intervals Meet if the end time of the first equals the start time of the second. The complete set
of interval relations, as proposed by Allen (1983), is shown graphically in Figure 12.2 and logically
below:
Meet(i,j) ⇔ End(i)=Begin(j)
Before(i,j) ⇔ End(i) < Begin(j)
After (j,i) ⇔ Before(i, j)
During(i,j) ⇔ Begin(j) < Begin(i) < End(i) < End(j)
Overlap(i,j) ⇔ Begin(i) < Begin(j) < End(i) < End(j)
Begins(i,j) ⇔ Begin(i) = Begin(j)
Finishes(i,j) ⇔ End(i) = End(j)
Equals(i,j) ⇔ Begin(i) = Begin(j) ∧ End(i) = End(j)
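The interval relations above translate directly into code if an interval is represented as a (begin, end) pair of numbers; the following sketch and its example intervals are illustrative.

def meet(i, j):     return i[1] == j[0]
def before(i, j):   return i[1] < j[0]
def after(j, i):    return before(i, j)
def during(i, j):   return j[0] < i[0] and i[1] < j[1]
def overlap(i, j):  return i[0] < j[0] < i[1] < j[1]
def begins(i, j):   return i[0] == j[0]
def finishes(i, j): return i[1] == j[1]
def equals(i, j):   return i[0] == j[0] and i[1] == j[1]

breakfast, lunch, morning = (8, 9), (12, 13), (6, 12)
print(before(breakfast, lunch))    # True
print(meet(morning, lunch))        # True: the morning ends exactly when lunch begins
print(during(breakfast, morning))  # True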
EVENTS
Event calculus reifies fluents and events. The fluent At(Shankar , Berkeley) is an object that refers
to the fact of Shankar being in Berkeley, but does not by itself say anything about whether it is true.
To assert that a fluent is actually true at some point in time we use the predicate T, as in T(At(Shankar, Berkeley), t).
Events are described as instances of event categories. The event E1 of Shankar flying from San
Francisco to Washington, D.C. is described as
E1 ∈ Flyings ∧ Flyer (E1, Shankar ) ∧ Origin(E1, SF) ∧ Destination(E1,DC)
We can define an alternative three-argument version of the category of flying events and say
E1 ∈ Flyings(Shankar, SF, DC)
We then use Happens(E1, i) to say that the event E1 took place over the time interval i, and we
say the same thing in functional form with Extent(E1)=i. We represent time intervals by a (start, end)
pair of times; that is, i = (t1, t2) is the time interval that starts at t1 and ends at t2. The complete
set of predicates for one version of the event calculus is
T(f, t) Fluent f is true at time t
Happens(e, i) Event e happens over the time interval i
Initiates(e, f, t) Event e causes fluent f to start to hold at time t
Terminates(e, f, t) Event e causes fluent f to cease to hold at time t
Clipped(f, i) Fluent f ceases to be true at some point during time interval i
Restored (f, i) Fluent f becomes true sometime during time interval i
We assume a distinguished event, Start , that describes the initial state by saying which fluents are
initiated or terminated at the start time. We define T by saying that a fluent holds at a point in time if
the fluent was initiated by an event at some time in the past and was not made false (clipped) by an
intervening event. A fluent does not hold if it was terminated by an event and not made true (restored)
by another event. Formally, the axioms are:
Happens(e, (t1, t2)) ∧ Initiates(e, f, t1) ∧ ¬Clipped(f, (t1, t)) ∧ t1 < t ⇒T(f, t)
Happens(e, (t1, t2)) ∧ Terminates(e, f, t1)∧ ¬Restored (f, (t1, t)) ∧ t1 < t ⇒¬T(f, t)
where Clipped and Restored are defined by
Clipped(f, (t1, t2)) ⇔ ∃ e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Terminates(e, f, t)
Restored(f, (t1, t2)) ⇔ ∃ e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Initiates(e, f, t)
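A simplified sketch of how T(f, t) can be evaluated from Happens/Initiates/Terminates records, in the spirit of the axioms above; for brevity, events here are recorded with single start times rather than full intervals, and the names and the example timeline are assumptions of the sketch.

happens    = [('Start', 0), ('Fly1', 1), ('Fly2', 4)]          # (event, start time)
initiates  = {('Start', ('At', 'Shankar', 'Berkeley')),
              ('Fly2',  ('At', 'Shankar', 'NewYork'))}
terminates = {('Fly1',  ('At', 'Shankar', 'Berkeley'))}

def clipped(f, t1, t2):
    # Does some event terminate fluent f at a time in [t1, t2)?
    return any((e, f) in terminates and t1 <= t < t2 for e, t in happens)

def T(f, t):
    # Fluent f holds at time t if it was initiated earlier and not clipped since.
    return any((e, f) in initiates and t0 < t and not clipped(f, t0, t)
               for e, t0 in happens)

print(T(('At', 'Shankar', 'Berkeley'), 0.5))   # True: initiated by Start at time 0
print(T(('At', 'Shankar', 'Berkeley'), 2))     # False: clipped by Fly1 at time 1
print(T(('At', 'Shankar', 'NewYork'), 5))      # True: initiated by Fly2 at time 4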
SEMANTIC NETWORKS
Inheritance becomes complicated when an object can belong to more than one category or when a category can be a subset of more than one other category; this is called multiple inheritance.
One drawback of semantic network notation, compared with first-order logic, is that links between bubbles represent only binary relations. For example, the sentence Fly(Shankar, NewYork, NewDelhi, Yesterday) cannot be asserted directly in a semantic network. Nonetheless, we can
obtain the effect of n-ary assertions by reifying the proposition itself as an event belonging to an
appropriate event category. Figure 12.6 shows the semantic network structure for this particular event.
Notice that the restriction to binary relations forces the creation of a rich ontology of reified concepts.
One of the most important aspects of semantic networks is their ability to represent default values
for categories. Examining Figure 12.5 carefully, one notices that John has one leg, despite the fact
that he is a person and all persons have two legs. In a strictly logical KB, this would be a contradiction,
but in a semantic network, the assertion that all persons have two legs has only default status; that is,
a person is assumed to have two legs unless this is contradicted by more specific information.