UNIT III
Advanced Knowledge Representation and Reasoning: Knowledge Representation Issues,
Nonmonotonic Reasoning, Other Knowledge Representation Schemes
Reasoning Under Uncertainty: Basic Probability, Acting Under Uncertainty, Bayes' Rule, Representing
Knowledge in an Uncertain Domain, Bayesian Networks
What is Knowledge?
Knowledge is a useful term to judge the understanding of an individual on a
given subject.
In intelligent systems, the domain is the main focused subject area, so the
system specifically focuses on acquiring the domain knowledge.
What is knowledge representation?
Humans are best at understanding, reasoning, and interpreting knowledge.
Humans know things, and as per their knowledge they perform various actions
in the real world. How machines do all these things comes under knowledge
representation and reasoning.
What to Represent:
Following are the kinds of knowledge which need to be represented in AI systems:
Objects: All the facts about objects in our world domain. E.g., guitars contain
strings; trumpets are brass instruments.
Events: Events are the actions which occur in our world.
Performance: It describes behavior which involves knowledge about how to do
things.
Meta-knowledge: It is knowledge about what we know.
Facts: Facts are the truths about the real world and what we represent.
Knowledge-Base: The central component of knowledge-based agents is the
knowledge base, represented as KB. The knowledge base is a group of
sentences (here, "sentence" is used as a technical term; it is not identical
to a sentence in English).
Reasoning in Artificial intelligence
Reasoning is the mental process of deriving logical conclusions and making
predictions from available knowledge, facts, and beliefs. Or we can say,
"Reasoning is a way to infer facts from existing data." It is a general process of
thinking rationally to find valid conclusions.
In artificial intelligence, reasoning is essential so that the machine can
think rationally like a human brain and perform like a human.

Types of Reasoning
In artificial intelligence, reasoning can be divided into the following categories:
Deductive reasoning
Inductive reasoning
Abductive reasoning
Common Sense Reasoning
Monotonic Reasoning
Non-monotonic Reasoning
1. Deductive reasoning:
It is a form of valid reasoning, which means the argument's conclusion must
be true when the premises are true.
Deductive reasoning uses propositional logic in AI, and it requires various
rules and facts. It is sometimes referred to as top-down reasoning, in
contrast to inductive reasoning. In deductive reasoning, the truth of the
premises guarantees the truth of the conclusion.
Example:
Premise-1: All humans eat veggies.
Premise-2: Sudha is a human.
Conclusion: Sudha eats veggies.
The general process of deductive reasoning is given below:
Fig: The general process of deductive reasoning (Theory → Hypothesis → Patterns → Confirmation)
2. Inductive Reasoning:
It starts with a series of specific facts or data and reaches a general
statement or conclusion.
Inductive reasoning is also known as bottom-up reasoning.
In inductive reasoning, we use historical data for which the premises support
the conclusion.
In inductive reasoning, the premises provide probable support to the conclusion,
so the truth of the premises does not guarantee the truth of the conclusion.
Example:
Premise: All of the pigeons we have seen in the zoo are white.
Conclusion: Therefore, we can expect all the pigeons to be white.
3. Abductive reasoning:
Abductive reasoning is a form of logical reasoning which starts with one or
more observations and then seeks to find the most likely explanation for the
observations.
Abductive reasoning is an extension of deductive reasoning, but in abductive
reasoning, the premises do not guarantee the conclusion.
Example:
Implication: The cricket ground is wet if it is raining.
Axiom: The cricket ground is wet.
Conclusion: It is raining.
4. Common Sense Reasoning
Common sense reasoning is an informal form of reasoning which can be gained
through experience. Common sense reasoning simulates the human ability to
make presumptions about events which occur every day.
It relies on good judgment rather than exact logic and operates on heuristic
knowledge and heuristic rules.
Example:
1. A person can be at only one place at a time.
2. If I put my hand in a fire, then it will burn.
5. Monotonic Reasoning:
In monotonic reasoning, once a conclusion is drawn, it will remain the
same even if we add other information to the existing information in our
knowledge base.
To solve monotonic problems, we can derive valid conclusions from the
available facts only, and they will not be affected by new facts.
Monotonic reasoning is not useful for real-time systems, as in real time
facts change, so we cannot use monotonic reasoning.
Any theorem proving is an example of monotonic reasoning.
Example: "Earth revolves around the Sun."
It is a true fact, and it cannot be changed even if we add other sentences to
the knowledge base like "The moon revolves around the earth" or "Earth is not
round," etc.
Advantages of Monotonic Reasoning:
+ In monotonic reasoning, each old proof will always remain valid.
Disadvantages of Monotonic Reasoning:
+ We cannot represent real-world scenarios using monotonic reasoning.
+ Since we can only derive conclusions from old proofs, new knowledge
from the real world cannot be added.
6. Non-monotonic Reasoning
In non-monotonic reasoning, some conclusions may be invalidated if we add
more information or knowledge to our knowledge base.
Non-monotonic reasoning deals with incomplete and uncertain knowledge.
"Human perception of various things in daily life" is a general example of
non-monotonic reasoning.
Example: Let us suppose the knowledge base contains the following knowledge:
+ Birds can fly.
+ Penguins cannot fly.
+ Pitty is a bird.
From the above sentences, we can conclude that Pitty can fly.
However, if we add another sentence to the knowledge base, "Pitty is a
penguin", which concludes "Pitty cannot fly", it invalidates the above
conclusion.
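The Pitty example can be sketched as a default rule with an exception. This is a minimal illustrative simplification (the function name and knowledge-base encoding are our assumptions), not a full non-monotonic logic:

```python
# Default reasoning with an exception (an illustrative sketch).
# The default "birds can fly" holds unless the exception "penguin" is known.
def can_fly(kb):
    if "penguin" in kb:   # exception overrides the default
        return False
    return "bird" in kb   # default rule: birds can fly

kb = {"bird"}             # Pitty is a bird -> conclude Pitty can fly
print(can_fly(kb))        # True

kb.add("penguin")         # adding knowledge invalidates the old conclusion
print(can_fly(kb))        # False
```

Adding a fact flipped a previously derived conclusion, which is exactly what makes this reasoning non-monotonic.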
Advantages of Non-monotonic Reasoning:
+ For real-world systems such as robot navigation, we can use non-monotonic
reasoning.
+ In non-monotonic reasoning, we can choose probabilistic facts or can make
assumptions.
Disadvantages of Non-monotonic Reasoning:
+ In non-monotonic reasoning, old facts may be invalidated by adding new
sentences.
+ It cannot be used for theorem proving.

Difference between Deductive and Inductive reasoning

| Deductive Reasoning | Inductive Reasoning |
| Deductive reasoning is the form of valid reasoning, used to deduce new information or conclusions from known related facts and information. | Inductive reasoning arrives at a conclusion by the process of generalization using specific facts or data. |
| It follows a top-down approach. | It follows a bottom-up approach. |
| It starts from the premises. | It starts from the conclusion. |
| In deductive reasoning, the conclusion must be true if the premises are true. | In inductive reasoning, the truth of the premises does not guarantee the truth of the conclusions. |
| Theory → hypothesis → patterns → confirmation. | Observations → patterns → hypothesis → theory. |
| In deductive reasoning, arguments may be valid or invalid. | In inductive reasoning, arguments may be weak or strong. |
| Deductive reasoning reaches from general facts to specific facts. | Inductive reasoning reaches from specific facts to general facts. |

Issues in knowledge representation
The main objective of knowledge representation is to draw the conclusions from the
knowledge, but there are many issues associated with the use of knowledge
representation techniques.
The fundamental goal of knowledge representation is to facilitate inference
(drawing conclusions) from knowledge.
Some of these issues are listed below:
Fig: Inheritable Knowledge Representation (a semantic-network diagram with isa and instance links; original image not reproduced)
Refer to the above diagram for the following issues.
1. Important attributes
Are there any attributes of objects so basic that they occur in almost every
problem domain?
There are two such attributes shown in the diagram: instance and isa. Since
these attributes support the property of inheritance, they are of prime importance.
2. Relationship among attributes
Are there any important relationships that exist among object attributes?
The relationships between the attributes of an object, independent of the specific
knowledge they encode, may hold properties like:
Inverse — This is about a consistency check while a value is added to one
attribute. The entities are related to each other in many different ways.
Existence in an isa hierarchy — This is about generalization-specialization:
just as there are classes of objects and specialized subsets of those classes,
there are attributes and specializations of attributes. For example, the attribute
height is a specialization of the general attribute physical-size which is, in turn,
a specialization of physical-attribute. These generalization-specialization
relationships are important for attributes because they support inheritance.
Technique for reasoning about values — This is about reasoning with values of
attributes not given explicitly. Several kinds of information are used in such
reasoning, e.g., height must be in a unit of length; the age of a person cannot
be greater than the age of the person's parents. The values are often specified
when a knowledge base is created.
Single-valued attributes — This is about a specific attribute that is guaranteed
to take a unique value. For example, a baseball player can at a time have only a
single height and be a member of only one team. KR systems take different
approaches to provide support for single-valued attributes.
Basically, the attributes used to describe objects are nothing but entities.
However, the attributes of an object do not depend on the encoded specific
knowledge.
3. Choosing the granularity of representation
At what level of detail should the knowledge be represented?
While deciding the granularity of representation, it is necessary to know the
following:
i. What are the primitives and at what level should the knowledge be represented?
ii. What should be the number (small or large) of low-level primitives or high-level
facts?
High-level facts may be insufficient to draw the conclusion while Low-level
primitives may require a lot of storage.
For example, suppose that we are interested in the following fact:
"John spotted Alex."
This could be represented as:
Spotted(agent(John), object(Alex))
Such a representation can make it easy to answer questions such as: "Who spotted
Alex?"
Suppose we want to know: "Did John see Sue?"
Given only the one fact, the user cannot discover that answer. Hence, the user can
add other facts, such as:
Spotted(x, y) → Saw(x, y)

4. Representing sets of objects
How should sets of objects be represented?
There are some properties of objects which are satisfied by a set of objects
together but not by its members individually.
Example: Consider the assertions made in the sentences:
"There are more sheep than people in Australia", and "English speakers can be
found all over the world."
These facts can be described by including assertions about the sets representing
people, sheep, and English speakers.
5. Finding the right structure as needed
Given a large amount of knowledge stored in a database, how can relevant
parts be accessed when they are needed?
To describe a particular situation, it is always important to find the right
structure. This can be done by selecting an initial structure and then revising the
choice.
While selecting and revising the right structure, it is necessary to solve the
following problems:
+ How to perform an initial selection of the most appropriate structure.
+ How to fill in appropriate details from the current situation.
+ How to find a better structure if the one chosen initially turns out not to be
appropriate.
+ What to do if none of the available structures is appropriate.
+ When to create and remember a new structure.
There is no specific way to solve these problems, but some of the effective
knowledge representation techniques have the potential to solve them.

Introduction to Nonmonotonic Reasoning
The definite clause logic is monotonic in the sense that anything that could be
concluded before a clause is added can still be concluded after it is added; adding
knowledge does not reduce the set of propositions that can be derived.
Logic is non-monotonic if some conclusions can be invalidated by adding
more knowledge.
The logic of definite clauses with negation as failure is non-monotonic.
Non-monotonic reasoning is useful for representing defaults. A default is a rule
that can be used unless it is overridden by an exception.
For example, to say that b is normally true if c is true, a knowledge base designer
can write a rule of the form
b ← c ∧ ∼aba.
where aba is an atom that means abnormal with respect to some aspect a. Given
c, the agent can infer b unless it is told aba. Adding aba to the knowledge base
can prevent the conclusion of b.
Rules that imply aba can be used to prevent the default under the conditions of
the body of the rule.
Example: Suppose the purchasing agent is investigating purchasing holidays. A
resort may be adjacent to a beach or away from a beach. This is not symmetric; if
the resort were adjacent to a beach, the knowledge provider would specify this.
Thus, it is reasonable to have the clause
away_from_beach ← ∼on_beach.
This clause enables an agent to infer that a resort is away from the beach if the
agent is not told it is adjacent to a beach.
A cooperative system tries not to mislead. If we are told the resort is on the
beach, we would expect that resort users would have access to the beach. If they
have access to a beach, we would expect them to be able to swim at the beach.
Thus, we would expect the following defaults:
beach_access ← on_beach ∧ ∼ab_beach_access.
swim_at_beach ← beach_access ∧ ∼ab_swim_at_beach.
A cooperative system would tell us if a resort on the beach has no beach access or
if there is no swimming. We could also specify that, if there is an enclosed bay and
a big city, then there is no swimming, by default:
ab_swim_at_beach ← enclosed_bay ∧ big_city ∧ ∼ab_no_swimming_near_city.
We could say that British Columbia is abnormal with respect to swimming near
cities:
ab_no_swimming_near_city ← in_BC ∧ ∼ab_BC_beaches.
Given only the preceding rules, an agent infers away_from_beach. If it is then told
on_beach, it can no longer infer away_from_beach, but it can now infer
beach_access and swim_at_beach. If it is also told enclosed_bay and big_city, it
can no longer infer swim_at_beach. However, if it is then told in_BC, it can then
infer swim_at_beach.
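The behavior just described can be reproduced with a small backward-chaining interpreter that treats ∼ (negation as failure) as "fails to prove". This is a minimal sketch assuming acyclic rules; the dictionary encoding and function names are ours, not from the source:

```python
# Each atom maps to a list of rule bodies; "not p" is negation as failure.
RULES = {
    "away_from_beach":          [["not on_beach"]],
    "beach_access":             [["on_beach", "not ab_beach_access"]],
    "swim_at_beach":            [["beach_access", "not ab_swim_at_beach"]],
    "ab_swim_at_beach":         [["enclosed_bay", "big_city",
                                  "not ab_no_swimming_near_city"]],
    "ab_no_swimming_near_city": [["in_BC", "not ab_BC_beaches"]],
}

def prove(goal, facts):
    """Backward chaining; 'not p' succeeds iff p cannot be proved."""
    if goal.startswith("not "):
        return not prove(goal[4:], facts)
    if goal in facts:
        return True
    return any(all(prove(g, facts) for g in body)
               for body in RULES.get(goal, []))

print(prove("away_from_beach", set()))                          # True
print(prove("swim_at_beach", {"on_beach"}))                     # True
print(prove("swim_at_beach",
            {"on_beach", "enclosed_bay", "big_city"}))          # False
print(prove("swim_at_beach",
            {"on_beach", "enclosed_bay", "big_city", "in_BC"})) # True
```

Each additional told fact toggles conclusions on or off, mirroring the non-monotonic behavior of the clauses above.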
By having defaults of what is normal, a user can interact with the system by
telling it what is abnormal, which allows for economy in communication. The user
does not have to state the obvious.
One way to think about non-monotonic reasoning is in terms of arguments. The
rules can be used as components of arguments, in which the negated abnormality
gives a way to undermine arguments.
Note that, in the language presented, only positive arguments exist that can
be undermined. In more general theories, there can be positive and negative
arguments that attack each other.
Knowledge Representation Schemes
A number of knowledge representation schemes (or formalisms) have
been used to represent the knowledge of humans in a systematic
manner. This knowledge is represented in a knowledge base such that
it can be retrieved for solving problems. Amongst the well-established
knowledge representation schemes are:
Production Rules
Semantic Networks
Frames
Conceptual Dependency Grammar
Conceptual Graphs
Ontology
Predicate and Modal Logic
Conceptual or Terminological Logics
XML / RDF
Types of Knowledge Representation
There are four types of knowledge representation:
Relational
Inheritable
Inferential
Declarative / Procedural

Relational Knowledge
It provides a framework to compare two objects based on equivalent attributes. Any
instance in which two different objects are compared is a relational type of knowledge.
This knowledge associates elements of one domain with another domain.
Relational knowledge is made up of objects consisting of attributes and their
corresponding associated values.
The result of this knowledge type is a mapping of elements among different domains.
The table below shows a simple way to store facts:
+ The facts about a set of objects are put systematically in columns.
+ This representation provides little opportunity for inference.
Table: Simple Relational Knowledge (table image not reproduced)
Given the facts, it is not possible to answer a simple question such as
"Who is the heaviest player?", but if a procedure for finding the heaviest player
is provided, then these facts will enable that procedure to compute an answer.
We can ask things like who "bats left" and who "throws right".
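A sketch of such a relational table and the procedures that make it answerable; the sample rows below are assumed for illustration, not taken from the source table:

```python
# Simple relational knowledge: facts stored as rows of attribute/value pairs.
players = [
    {"name": "Aaron", "height": 6.0, "weight": 180, "bats": "right", "throws": "right"},
    {"name": "Mays",  "height": 5.8, "weight": 170, "bats": "right", "throws": "right"},
    {"name": "Ruth",  "height": 6.2, "weight": 215, "bats": "left",  "throws": "left"},
]

# The bare table supports no inference; a stored procedure over it
# can answer "who is the heaviest player?".
def heaviest(table):
    return max(table, key=lambda row: row["weight"])["name"]

def who(table, attribute, value):
    return [row["name"] for row in table if row[attribute] == value]

print(heaviest(players))             # Ruth
print(who(players, "bats", "left"))  # ['Ruth']
```

This illustrates the point in the text: the facts alone provide little opportunity for inference; the procedures supply it.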
Inheritable Knowledge
It is obtained from associated objects. It prescribes a structure in which new objects
are created which may inherit attributes from their parents or a subset of attributes
from existing objects.
Here the knowledge elements inherit attributes from their parents, but in many cases
not all attributes of the parent elements are prescribed to the child elements.
This knowledge is embodied in the design hierarchies found in the functional, physical
and process domains.
Inheritance is a powerful form of inference, but not adequate on its own; the basic KR
needs to be augmented with an inference mechanism.
The KR in hierarchical structure, shown below, is called a "semantic network".
Property inheritance: The objects or elements of specific classes inherit attributes and
values from more general classes. The classes are organized in a generalization hierarchy.
Fig: Inheritable knowledge representation (KR) — a semantic network for baseball
knowledge, in which isa links show class inclusion and instance links show class
membership (diagram image not reproduced).
The directed arrows represent attributes (isa, instance, team); each originates at the
object being described and terminates at the object or its value.
The box nodes represent objects and values of the attributes.
Viewing a node as a frame
Example: Baseball-Player
  isa: Adult-Male
  bats: EQUAL handed
  height: 6.1
  batting-average: 0.252
The retrieval algorithm below is simple; it describes the basic mechanism of
inheritance. It does not say what to do if there is more than one value of the
instance or isa attribute.
It can be applied to the knowledge base illustrated in the previous slide to derive
answers to the following queries:
  team(Pee-Wee-Reese) = Brooklyn-Dodger
  batting-average(Three-Finger-Brown) = 0.106
  height(Pee-Wee-Reese) = 6.1
  bats(Three-Finger-Brown) = right
The steps to retrieve a value for an attribute of an instance object:
1. Find the object in the knowledge base.
2. If there is a value for the attribute, report it.
3. Otherwise, look for a value of instance; if none, fail.
4. Also, go to that node and find a value for the attribute and then report it.
5. Otherwise, search through using isa until a value is found for the attribute.
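The retrieval steps above can be sketched as a walk up the instance/isa chain. The frames below follow the slide's example, with attribute placement simplified for illustration:

```python
# Frames linked by "instance" (class membership) and "isa" (class inclusion).
KB = {
    "Person":             {},
    "Adult-Male":         {"isa": "Person"},
    "Baseball-Player":    {"isa": "Adult-Male", "height": 6.1,
                           "bats": "right", "batting-average": 0.252},
    "Pitcher":            {"isa": "Baseball-Player", "batting-average": 0.106},
    "Three-Finger-Brown": {"instance": "Pitcher"},
    "Pee-Wee-Reese":      {"instance": "Baseball-Player",
                           "team": "Brooklyn-Dodger"},
}

def get_value(obj, attr):
    """Report a directly stored value, else inherit via instance/isa links."""
    frame = KB.get(obj)
    while frame is not None:
        if attr in frame:
            return frame[attr]                 # step 2: value stored directly
        parent = frame.get("instance") or frame.get("isa")
        frame = KB.get(parent)                 # steps 3-5: climb the chain
    return None                                # no value found: fail

print(get_value("Pee-Wee-Reese", "team"))                  # Brooklyn-Dodger
print(get_value("Three-Finger-Brown", "batting-average"))  # 0.106
print(get_value("Pee-Wee-Reese", "height"))                # 6.1
print(get_value("Three-Finger-Brown", "bats"))             # right
```

The queries match the slide: the Pitcher frame shadows the batting-average inherited from Baseball-Player, while height and bats are found further up the chain.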
Inferential Knowledge
It is inferred from objects through relations among objects.
E.g., a word alone is simple syntax, but with the help of other words in a phrase the
reader may infer more from the word; this inference within linguistics is called semantics.
This knowledge generates new information from the given information.
- This new information does not require further data gathering from the source, but
does require analysis of the given information to generate new knowledge.
Example: given a set of relations and values, one may infer other values or relations.
Predicate logic (a mathematical deduction) is used to infer from a set of attributes.
Moreover, inference through predicate logic uses a set of logical operations to
relate individual data.
- Represent knowledge as formal logic: All dogs have tails: ∀x: dog(x) → hastail(x)
Advantages:
- A set of strict rules.
- Can be used to derive more facts.
- Truths of new statements can be verified.
- Guaranteed correctness.
Also, many inference procedures are available to implement standard rules of logic,
popular in AI systems, e.g., automated theorem proving.
Example:
Given a set of relations and values, one may infer other values or relations. Predicate
logic (a mathematical deduction) is used to infer from a set of attributes. The logical
operators used are:
"→" (implication), "¬" (not), "∨" (or), "∧" (and), "∀" (for all), "∃" (there exists).
Examples of predicate logic statements:
1. "Wonder" is the name of a dog: dog(Wonder)
2. All dogs belong to the class of animals: ∀x: dog(x) → animal(x)
3. All animals either live on land or in water:
∀x: animal(x) → live(x, land) ∨ live(x, water)
From these three statements we can infer that:
" Wonder lives either on land or on water.”
Note : If more information is made available about these objects and their relations,
then more knowledge can be inferred.
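The inference above can be mechanized as simple forward chaining over ground facts. This sketch (our encoding, not from the source) collapses the disjunctive conclusion into a single proposition, lives_on_land_or_water, since plain forward chaining cannot split a disjunction:

```python
# Forward chaining: repeatedly apply implications until no new facts appear.
facts = {("dog", "Wonder")}                # "Wonder" is the name of a dog
rules = [
    ("dog", "animal"),                     # ∀x: dog(x) -> animal(x)
    ("animal", "lives_on_land_or_water"),  # ∀x: animal(x) -> live on land or water
]

changed = True
while changed:
    changed = False
    for pre, post in rules:
        for pred, x in list(facts):
            if pred == pre and (post, x) not in facts:
                facts.add((post, x))       # derive a new ground fact
                changed = True

print(("lives_on_land_or_water", "Wonder") in facts)  # True
```

The loop derives animal(Wonder) and then the land-or-water conclusion, matching the inference in the text.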
Declarative Knowledge
A statement in which knowledge is specified, but the use to which that knowledge is to
be put is not given.
E.g., laws, people's names; these are facts which can stand alone, not dependent on
other knowledge.
Here, the knowledge is based on declarative facts about axioms and domains.
- Axioms are assumed to be true unless a counterexample is found to invalidate them.
- Domains represent the physical world.
Axioms and domains thus simply exist and serve as declarative statements that can
stand alone.
Procedural Knowledge
A representation in which the control information needed to use the knowledge is
embedded in the knowledge itself. For example, computer programs, directions, and
recipes; these indicate a specific use or implementation.
Here, the knowledge is a mapping process between domains that specifies
"what to do when", and the representation is of "how to make it" rather than "what
it is".
Moreover, knowledge is encoded in procedures: small programs that know how to do
specific things, how to proceed.
Advantages:
- Heuristic or domain-specific knowledge can be represented.
- Extended logical inferences, such as default reasoning, are facilitated.
- Side effects of actions may be modeled; some rules may become false in time.
Disadvantages:
- Completeness: not all cases may be represented.
- Consistency: not all deductions may be correct. E.g., if we know that Fred is a bird
we might deduce that Fred can fly; later we might discover that Fred is an emu.
- Modularity is sacrificed: changes in the knowledge base might have far-reaching
effects.
Example: A parser in a natural language system has the knowledge that a noun phrase
may contain articles, adjectives and nouns. It accordingly calls routines that know how
to process articles, adjectives and nouns.

Probabilistic reasoning:
Probabilistic reasoning is a way of knowledge representation where we
apply the concept of probability to indicate the uncertainty in knowledge.
In probabilistic reasoning, we combine probability theory with logic to
handle the uncertainty.
We use probability in probabilistic reasoning because it provides a way to
handle the uncertainty that is the result of someone's laziness and
ignorance.
In the real world, there are many scenarios where the certainty of
something is not confirmed, such as "It will rain today," "the behavior of
someone in some situations," or "a match between two teams or two
players." These are probable sentences for which we can assume that they
will happen, but we are not sure about them, so here we use probabilistic
reasoning.
In probabilistic reasoning, there are two ways to solve problems with
uncertain knowledge:
o Bayes’ rule
o Bayesian Statistics
As probabilistic reasoning uses probability and related terms, so before
understanding probabilistic reasoning, let's understand some common
terms:
Probability: Probability can be defined as the chance that an uncertain
event will occur. It is the numerical measure of the likelihood that an
event will occur. The value of a probability always lies between 0 and 1,
representing the degrees of uncertainty.
1. 0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
2. P(A) = 0 indicates total uncertainty in an event A.
3. P(A) = 1 indicates total certainty in an event A.
We can find the probability of an uncertain event by using the below
formula.
Probability of occurrence = (Number of desired outcomes) / (Total number of outcomes)
+ P(¬A) = probability of event A not happening.
+ P(A) + P(¬A) = 1.
Event: Each possible outcome of a variable is called an event.
Sample space: The collection of all possible events is called sample
space.
Random variables: Random variables are used to represent the events
and objects in the real world.
Prior probability: The prior probability of an event is the probability
computed before observing new information.
Posterior probability: The probability that is calculated after all
evidence or information has been taken into account. It is a combination of
prior probability and new information.
Conditional probability:
Conditional probability is the probability of an event occurring given that
another event has already happened.
Let's suppose we want to calculate the probability of event A when event B has
already occurred, "the probability of A under the condition of B"; it can
be written as:
  P(A|B) = P(A ∧ B) / P(B)
where P(A ∧ B) = joint probability of A and B, and
P(B) = marginal probability of B.
If the probability of A is given and we need to find the probability of B,
then it will be given as:
  P(B|A) = P(A ∧ B) / P(A)
It can be explained using a Venn diagram: where B is the occurred event, the
sample space is reduced to set B, and we can calculate event A given B by
dividing the probability P(A ∧ B) by P(B).

Example: In a class, 70% of the students like English and 40% of the
students like both English and mathematics. What is the percentage of
students who like English that also like mathematics?
Solution:
Let A be the event that a student likes Mathematics, and
B be the event that a student likes English.
  P(A|B) = P(A ∧ B) / P(B) = 0.4 / 0.7 = 0.57 ≈ 57%
Hence, 57% of the students who like English also like Mathematics.
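The same computation, as a quick check (variable names are ours):

```python
# P(B): student likes English; P(A ∧ B): student likes English and Mathematics.
p_english = 0.70
p_english_and_math = 0.40

# Conditional probability: P(A|B) = P(A ∧ B) / P(B)
p_math_given_english = p_english_and_math / p_english
print(round(p_math_given_english, 2))  # 0.57, i.e. about 57%
```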
Bayes' theorem:
* Bayes’ theorem is also known as Bayes’ rule, Bayes’ law,
or Bayesian reasoning, which determines the probability of an
event with uncertain knowledge.
In probability theory, it relates the conditional probability and
marginal probabilities of two random events.
Bayes' theorem was named after the British mathematician Thomas Bayes.
Bayesian inference is an application of Bayes' theorem, which is
fundamental to Bayesian statistics.
It is a way to calculate the value of P(B|A) with the knowledge
of P(A|B).
Bayes’ theorem allows updating the probability prediction of an
event by observing new information of the real world.
Example: If cancer corresponds to one's age then by using Bayes’
theorem, we can determine the probability of cancer more accurately
with the help of age.
Bayes' theorem can be derived using the product rule and conditional
probability of event A with known event B:
As per the product rule we can write:
  P(A ∧ B) = P(A|B) P(B)
Similarly, the probability of event B with known event A:
  P(A ∧ B) = P(B|A) P(A)
Equating the right-hand sides of both equations, we get:
  P(A|B) = [P(B|A) P(A)] / P(B)    ...(a)
The above equation (a) is called Bayes' rule or Bayes' theorem. This
equation is the basis of most modern AI systems for probabilistic
inference.
It shows the simple relationship between joint and conditional
probabilities. Here,
P(A|B) is known as the posterior, which we need to calculate; it is read
as the probability of hypothesis A given that evidence B has occurred.
P(B|A) is called the likelihood: assuming the hypothesis is true, we
calculate the probability of the evidence.
P(A) is called the prior probability: the probability of the hypothesis
before considering the evidence.
P(B) is called the marginal probability: the pure probability of the
evidence.
In equation (a), in general, we can write P(B) = Σi P(Ai) · P(B|Ai);
hence Bayes' rule can be written as:
  P(Ai|B) = [P(Ai) · P(B|Ai)] / [Σk P(Ak) · P(B|Ak)]
where A1, A2, A3, ..., An is a set of mutually exclusive and
exhaustive events.
Applying Bayes' rule:
Bayes' rule allows us to compute the single term P(B|A) in terms of
P(A|B), P(B), and P(A). This is very useful in cases where we have good
estimates of these three terms and want to determine the fourth one.
Suppose we want to perceive the effect of some unknown cause and want to
compute that cause; then Bayes' rule becomes:
  P(cause|effect) = [P(effect|cause) · P(cause)] / P(effect)

Example-1:
Question: What is the probability that a patient has the disease
meningitis, given a stiff neck?
A doctor is aware that the disease meningitis causes a patient to have a
stiff neck, and it occurs 80% of the time. He is also aware of some
more facts, which are given as follows:
+ The known probability that a patient has meningitis is 1/30,000.
+ The known probability that a patient has a stiff neck is 2%.
Let a be the proposition that the patient has a stiff neck and b be the
proposition that the patient has meningitis. Then we can calculate the
following:
  P(a|b) = 0.8
  P(b) = 1/30000
  P(a) = 0.02
  P(b|a) = [P(a|b) · P(b)] / P(a) = (0.8 × 1/30000) / 0.02 ≈ 0.001333
Hence, we can assume that 1 patient out of 750 patients has
meningitis disease with a stiff neck.
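The same calculation as code (variable names are ours):

```python
# Bayes' rule: P(b|a) = P(a|b) * P(b) / P(a)
p_a_given_b = 0.8    # stiff neck given meningitis
p_b = 1 / 30000      # prior probability of meningitis
p_a = 0.02           # probability of a stiff neck

p_b_given_a = p_a_given_b * p_b / p_a
print(p_b_given_a)   # ~0.001333, i.e. about 1 patient in 750
```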
Example-2:
Question: From a standard deck of playing cards, a single card is
drawn. The probability that the card is king is 4/52, then
calculate posterior probability P(King|Face), which means the
drawn face card is a king card.
Solution:
  P(King|Face) = [P(Face|King) · P(King)] / P(Face)    ...(i)
P(King): probability that the card is a king = 4/52 = 1/13
P(Face): probability that a card is a face card = 12/52 = 3/13
P(Face|King): probability of a face card given that it is a king = 1
Putting all values into equation (i), we get:
  P(King|Face) = (1 × 1/13) / (3/13) = 1/3
which is the probability that a drawn face card is a king card.
yApplication of Bayes' theorem in Artificial intelligence:
Following are some applications of Bayes’ theorem:
+ It is used to calculate the next step of the robot when the already
executed step is given.
+ Bayes' theorem is helpful in weather forecasting.
+ It can solve the Monty Hall problem.
UNCERTAINTY
To act rationally under uncertainty we must be able to evaluate how likely
certain things are. With FOL a fact F is only useful if it is known to be true
or false. But we need to be able to evaluate how likely it is that F is true.
By weighing likelihoods of events (probabilities) we can develop
mechanisms for acting rationally under uncertainty.
Dental diagnosis example:
In FOL we might formulate
  ∀p symptom(p, toothache) → disease(p, cavity) ∨ disease(p, gumDisease) ∨ disease(p, foodStuck) ∨ ...
When do we stop? We cannot list all possible causes.
We also want to rank the possibilities: we don't want to start drilling for a
cavity before checking for more likely causes first.
What is Uncertainty?
+ Uncertainty is essentially a lack of information to formulate a decision.
+ Uncertainty may result in making poor or bad decisions.
+ As living creatures, we are accustomed to dealing with uncertainty; that's
how we survive.
+ Dealing with uncertainty requires reasoning under uncertainty along with
possessing a lot of common sense.

Dealing with Uncertainty
+ Deductive reasoning: deals with exact facts and exact conclusions.
+ Inductive reasoning: not as strong as deductive; the premises support the
conclusion but do not guarantee it.
+ There are a number of methods to pick the best solution in light of
uncertainty.
+ When dealing with uncertainty, we may have to settle for just a good
solution.
What is Uncertainty?
+ All measurements fall between two divisions on a
measuring device.
+ The distance between the two divisions is estimated.
+ This estimated value is the uncertainty and is expressed in a
final recorded value.
Certain vs. Uncertain Digits
* In the example below, we can see for certain that the leaf is
greater than 3 units long. This is the certain digit.
* How far between the 3 and 4 units is the tip of the leaf?
+ Is it 3.6? 3.7? 3.5?
* The digit in the tenths place is estimated — it is the
uncertain digit.
Certain vs. Uncertain Digits
+ Let’s report the leaf’s length at 3.7 units long.
+ This measurement has two significant figures — one is
certain, the other is uncertain.
+ We cannot report the measurement as 3.70 units — we
don’t have a division in tenths!
+ If we reported the length as 3.70 units we'd be telling the
reader that our ruler was divided into tenths and the
hundredths place was estimated.
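The reporting rule above (keep exactly one estimated digit beyond the instrument's smallest division) can be sketched as a small helper; the function name and interface are illustrative only:

```python
# Report a reading with one estimated digit beyond the instrument's
# smallest division, as in the ruler examples above.
import math

def report(reading: float, division: float) -> str:
    """Round to one decimal place finer than the instrument's division."""
    decimals = max(0, -int(math.floor(math.log10(division))) + 1)
    return f"{reading:.{decimals}f}"

print(report(3.71234, 1.0))  # ruler divided in whole units -> '3.7'
print(report(8.504, 0.1))    # ruler divided in tenths -> '8.50'
```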
Practice
Compare the measurements of these two rulers:
Practice
Compare these three:
What is the value? (~10.5)    Estimated place? (tenths)
What is the value? (~8.50)    Estimated place? (hundredths)
What is the value? (~11.90)   Estimated place? (hundredths)
Why do we need reasoning under uncertainty?
= There are many situations where uncertainty arises:
When an insurance company offers a policy it has calculated the risk that
you will claim
When your brain estimates what an object is, it filters random noise and fills
in missing details.
When you play a game you cannot be certain what the other player will do
A medical expert system that diagnoses disease has to deal with the results
of tests that are sometimes incorrect
= Systems which can reason about the effects of uncertainty
should do better than those that don’t
+ But how should uncertainty be represented?
Two (toy) examples
+ Ihave toothache. What is the cause?
There are many possible causes of an observed event.
If I go to the dentist and he examines me, and the probe catches, this
indicates there may be a cavity, rather than another cause.
The likelihood of a hypothesised cause will change as additional pieces
of evidence arrive.
Bob lives in San Francisco. He has a burglar alarm on his house, which
can be triggered by burglars and earthquakes. He has two neighbours,
John and Mary, who will call him if the alarm goes off while he is at work,
but each is unreliable in their own way. All these sources of uncertainty
can be quantified. Mary calls, how likely is it that there has been a
burglary?
Using probabilistic reasoning we can calculate how likely a hypothesised
cause is.
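The burglar-alarm story can be made concrete with exact inference by enumeration. The conditional-probability numbers below are assumptions (the widely used textbook values for this example; the text itself does not fix them):

```python
# Inference by enumeration on the burglary-alarm example:
# P(Burglary | MaryCalls), summing out Earthquake and Alarm.
from itertools import product

P_B, P_E = 0.001, 0.002                              # priors (assumed values)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(Alarm | B, E)
P_M = {True: 0.70, False: 0.01}                      # P(MaryCalls | Alarm)

def joint(b, e, a, m):
    """Full joint probability of one assignment to all four variables."""
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    p *= P_M[a] if m else 1 - P_M[a]
    return p

num = sum(joint(True, e, a, True) for e, a in product([True, False], repeat=2))
den = sum(joint(b, e, a, True) for b, e, a in product([True, False], repeat=3))
print(f"P(burglary | Mary calls) = {num / den:.3f}")  # ~0.056
```

With these numbers a call from Mary alone raises the probability of burglary from 0.001 to only about 0.056, because her calls are unreliable.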
Probability Theory: Variables and Events
= Arandom variable can be an observation, outcome or event the value of
which is uncertain.
e.g. a coin toss. Let's use Throw as the random variable denoting the
outcome when we toss the coin.
The set of possible outcomes for a random variable is called its domain.
The domain of Throw is {head, tail}
A Boolean random variable has two outcomes.
Cavity has the domain {true, false}
Toothache has the domain {true, false}
Probability Theory: Atomic events
We can create new events out of combinations of the outcomes of
random variables
Anatomic event is a complete specification of the values of the random
variables of interest
e.g. if our world consists of only two Boolean random variables, then the
world has four possible atomic events:
Toothache = true ∧ Cavity = true
Toothache = true ∧ Cavity = false
Toothache = false ∧ Cavity = true
Toothache = false ∧ Cavity = false
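The four atomic events listed above can be generated mechanically as the Cartesian product of each variable's domain:

```python
# Enumerate all atomic events for two Boolean random variables,
# Toothache and Cavity: 2 variables x 2 values each = 4 events.
from itertools import product

variables = ["Toothache", "Cavity"]
atomic_events = [dict(zip(variables, values))
                 for values in product([True, False], repeat=len(variables))]
for event in atomic_events:
    print(event)
```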
The set of all possible atomic events has two properties:
+ It is exhaustive (nothing else can happen)
+ It is mutually exclusive (only one of the four can happen at one time)
Probability theory: probabilities
+ We can assign probabilities to the outcomes of a random variable.
P(Throw = heads) = 0.5
P(Mary_Calls = true) = 0.1
P(a) = 0.3
Some simple rules governing probabilities.
All probabilities are between 0 and 1 inclusive: 0 ≤ P(a) ≤ 1
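A minimal sanity check of these rules for the coin example, using a dictionary as an assumed representation of a distribution:

```python
# Check the basic probability rules on the coin-toss distribution above:
# every probability lies in [0, 1], and the outcomes together sum to 1.
P = {"heads": 0.5, "tails": 0.5}

assert all(0.0 <= p <= 1.0 for p in P.values())  # each probability in [0, 1]
assert abs(sum(P.values()) - 1.0) < 1e-12        # probabilities sum to 1
print("distribution is valid")
```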