BECE309L – AI & ML
Knowledge
Representation
Module – 3
▪ First-order logic
Terminologies
Knowledge Representation (KR):
• The field in AI that focuses on how to formally represent and manage knowledge about the
world in a way that allows computers to use it effectively for reasoning and decision-making.
Knowledge base (KB):
• A collection of facts and rules about the domain that the agent draws on when reasoning.
Inference Engine (IE):
• The component that applies logical rules to the knowledge base to derive new information and answer queries.
Ontologies:
• Formal representations that define a set of concepts and the relationships between them
within a domain, providing a shared vocabulary and a structure for reasoning.
Terminologies
Knowledge Representation Structures
Knowledge Base (KB):
• A structured store of facts and rules about the domain, used by the inference engine.
Taxonomy:
• A hierarchical classification of concepts, from the most general to the most specific.
Frames:
• Structures that group the attributes (slots) and values describing a stereotyped object or situation.
▪ Reasoning
▪ Deductive Reasoning:
▪ Drawing specific conclusions from general principles or premises.
▪ Inductive Reasoning:
▪ Generalizing from specific instances to broader principles.
▪ Abductive Reasoning:
▪ Inferring the best explanation for a set of observations or facts.
Terminologies
Examples: Inference and Reasoning
▪ Deductive Reasoning
▪ Premise 1: All mammals have a backbone.
▪ Premise 2: A whale is a mammal.
▪ Conclusion: Therefore, a whale has a backbone.
▪ In this example, the conclusion is a specific fact drawn from the general principle that all mammals have a backbone.
▪ Inductive Reasoning
▪ Example: You observe that the sun has risen in the east every morning so far. It rose in the east yesterday. It rose in the east today.
▪ Conclusion: Therefore, the sun will likely rise in the east tomorrow.
▪ Here, the conclusion generalizes a pattern based on specific observations.
▪ Abductive Reasoning
▪ Example: Observation: The grass is wet in the morning.
▪ Possible Explanation 1: It rained during the night.
▪ Possible Explanation 2: The sprinkler was left on overnight.
▪ Conclusion: Since there are no signs of rainfall and the sprinkler was indeed on, the best explanation is that the sprinkler caused the wet grass.
▪ Abductive reasoning involves inferring the most likely explanation from the available evidence.
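As a minimal sketch (not from the slides), deductive reasoning can be mechanized by applying a general rule to a specific fact; the fact and rule encodings below are purely illustrative.

# Deductive reasoning sketch: derive a specific conclusion from a general rule.
facts = {("mammal", "whale")}                  # Premise 2: a whale is a mammal
rule = ("mammal", "has_backbone")              # Premise 1: all mammals have a backbone

derived = {(rule[1], x) for (pred, x) in facts if pred == rule[0]}
print(derived)                                 # {('has_backbone', 'whale')} -> a whale has a backbone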
Terminologies
▪ Types of Knowledge
▪ Declarative Knowledge:
▪ Knowledge about facts and information
▪ e.g., "The Eiffel Tower is in Paris".
▪ Procedural Knowledge:
▪ Knowledge about how to perform tasks or procedures
▪ e.g., "How to change a tire".
▪ Representation Models
▪ Propositional Logic:
▪ A form of logic where statements are either true or false, and
▪ Logical operations are performed on these statements.
▪ Predicate Logic:
▪ An extension of propositional logic that includes predicates (functions that return true or false) and
quantifiers (e.g., "For all x" or "There exists an x").
Terminologies
▪ Semantic Networks:
▪ Graph-based structures where nodes represent entities or concepts and edges represent
relationships between them.
▪ Semantic relationships
▪ Hierarchy:
▪ A relationship where one concept is a more general or more specific instance of another (e.g., "Dog"
is a more specific instance of "Animal").
▪ Association:
▪ A relationship where concepts are related but not hierarchically (e.g., "Doctor" is associated with
"Hospital").
Terminologies
▪ Uncertainty and Ambiguity
▪ Probabilistic Reasoning:
▪ A method of dealing with uncertainty by using probability theory to represent and infer the
likelihood of various outcomes.
▪ Example:
▪ Imagine you're a doctor diagnosing a patient with symptoms of fever, cough, and fatigue. You know
that there are several possible causes, such as the flu, COVID-19, or a common cold. Using
probabilistic reasoning, you assign probabilities based on the prevalence of these illnesses in the
community:
▪ Probability of the flu: 50%
▪ Probability of COVID-19: 30%
▪ Probability of the common cold: 20%
▪ Given the symptoms and their likelihood, you might infer that the flu is the most likely cause.
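A small sketch of this diagnosis step, using only the illustrative probabilities stated above; a real system would also combine these priors with symptom likelihoods (e.g., via Bayes' rule).

# Probabilistic reasoning sketch: pick the most likely cause from the stated probabilities.
priors = {"flu": 0.50, "covid-19": 0.30, "common cold": 0.20}

most_likely = max(priors, key=priors.get)
print(most_likely, priors[most_likely])      # flu 0.5 -> flu is the most likely cause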
Terminologies
▪ Knowledge Acquisition and Learning
▪ Knowledge Acquisition:
▪ The process of gathering and structuring knowledge from various sources to populate a knowledge
base.
▪ Example:
▪ A software company is developing an expert system to assist with medical diagnoses. To populate
the system with knowledge, they interview several doctors and gather information from medical
textbooks and research papers.
▪ They extract rules like:
▪ "If the patient has a high fever and cough, consider testing for the flu.“
▪ "If the patient has loss of taste and smell, test for COVID-19.“
▪ This gathered and structured knowledge is then used to create a knowledge base that the expert
system relies on to make informed decisions.
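One hedged sketch of how the elicited rules above might be stored: each rule becomes a (conditions, recommendation) pair. The encoding is illustrative, not a standard expert-system format.

# Knowledge acquisition sketch: expert rules stored as (conditions, recommendation) pairs.
knowledge_base = [
    ({"high fever", "cough"}, "consider testing for the flu"),
    ({"loss of taste", "loss of smell"}, "test for COVID-19"),
]

def recommend(symptoms):
    """Return every recommendation whose conditions are all present in the symptoms."""
    return [advice for conditions, advice in knowledge_base if conditions <= symptoms]

print(recommend({"high fever", "cough", "fatigue"}))   # ['consider testing for the flu']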
Agents that know things
▪ Agents acquire knowledge through perception, learning, language
▪ Knowledge of the effects of actions (“transition model”)
▪ Knowledge of how the world affects sensors (“sensor model”)
▪ Knowledge of the current state of the world
▪ Can keep track of a partially observable world
▪ Can formulate plans to achieve goals
Knowledge
▪ Knowledge base = set of sentences in a formal language
▪ Declarative approach to building an agent (or other system):
▪ Tell it what it needs to know (or have it Learn the knowledge)
▪ Then it can Ask itself what to do—answers should follow from the KB
▪ Agents can be viewed at the knowledge level
i.e., what they know, regardless of how implemented
▪ A single inference algorithm can answer any answerable question
Knowledge based Agents
▪ The knowledge base holds domain-specific facts.
▪ Each time the agent function is executed, the agent performs three operations:
▪ Firstly, it reports to the KB what it has perceived.
▪ Secondly, it asks the KB what action it should take.
▪ Thirdly, it reports to the KB which action it has chosen.
Knowledge based Agents
▪ MAKE-PERCEPT-SENTENCE
▪ Generates a sentence that indicates that the agent perceived the given percept at
the given time.
▪ MAKE-ACTION-QUERY
▪ Generates a sentence that asks which action should be taken at the current time.
▪ MAKE-ACTION-SENTENCE
▪ Generates a sentence that asserts that the chosen action has been executed.
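A rough Python sketch of the generic knowledge-based agent loop described above. The SimpleKB class, the sentence tuples, and the "NoOp" placeholder are assumptions; a real agent would run logical inference inside ask.

# Generic knowledge-based agent loop (sketch of TELL / ASK / TELL).
class SimpleKB:
    def __init__(self, background=None):
        self.sentences = list(background or [])   # background knowledge
    def tell(self, sentence):
        self.sentences.append(sentence)
    def ask(self, query):
        return "NoOp"        # placeholder: a real KB would run inference here

kb = SimpleKB()
t = 0

def kb_agent(percept):
    """One step of the agent: report the percept, ask for an action, report the action."""
    global t
    kb.tell(("percept", percept, t))       # MAKE-PERCEPT-SENTENCE
    action = kb.ask(("action?", t))        # MAKE-ACTION-QUERY
    kb.tell(("did", action, t))            # MAKE-ACTION-SENTENCE
    t += 1
    return action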
Knowledge based Agents
▪ A KBA must be able to do the following:
▪ An agent should be able to represent states, actions, etc.
▪ An agent should be able to incorporate new percepts
▪ An agent can update the internal representation of the world
▪ An agent can deduce hidden properties of the world
▪ An agent can deduce appropriate actions.
Levels of KBAs
▪ Knowledge-based agents can be classified into three levels:
▪ Knowledge Level
▪ Knowledge level is the highest level of abstraction in a knowledge-based agent.
▪ It describes what the agent knows and how it uses it to perform tasks.
▪ Knowledge level concerns the representation and organization of knowledge rather than
the implementation details.
▪ For example, suppose an automated taxi agent needs to go from station A to station B, and it
knows the way from A to B; this knowledge sits at the knowledge level.
▪ Logical Level
▪ Logical level is the intermediate level of abstraction in a knowledge-based agent.
▪ It describes how knowledge is represented and manipulated by the inference engine.
▪ The logical level concerns the formal logic used to represent knowledge and make inferences.
▪ Example: At the logical level, we expect the automated taxi agent to reach destination B.
Levels of KBAs
▪ Knowledge-based agents can be classified into three levels:
▪ Implementation Level
▪ Implementation level is the lowest level of abstraction in a knowledge-based agent.
▪ It describes how the knowledge and inference engine is implemented using a
programming language.
▪ The implementation level is concerned with the details of the programming
language and the algorithms used to implement the knowledge and inference
engine.
Actions Performed by the Agent
▪ An inference system is used when we want to add new information (sentences) to the
knowledge base or to query the information already present.
▪ This mechanism is realized through the TELL and ASK operations.
▪ Inference means producing new sentences from old ones.
▪ When a question is ASKed of the KB, the answer should follow from what has previously been TOLD to the KB.
▪ The agent also has a KB, which initially contains some background knowledge.
▪ Whenever the agent program is called, it performs some actions.
Actions Performed by the Agent
▪ KBAs engage in three primary operations to demonstrate intelligent behavior:
▪ TELL:
▪ Agent informs the knowledge base about the information it has perceived
from the environment.
▪ This operation allows the knowledge base to be continually updated with
new facts, ensuring the agent's decisions are based on the most current
information.
▪ ASK:
▪ Agent queries the knowledge base to determine the best course of action
based on the available knowledge.
▪ This operation is crucial for decision-making, allowing the agent to
evaluate various options before taking action.
Actions Performed by the Agent
▪ PERFORM:
▪ Based on the knowledge base's recommendation, the agent executes
the selected action.
▪ This operation demonstrates the agent's ability to interact with and
impact its environment effectively.
Designing a KBA
▪ Define the Domain and Scope
▪ Domain Understanding:
▪ Clearly define the domain in which the agent will operate.
▪ Understanding the domain helps in identifying the type of knowledge that needs to be represented and
the complexity of interactions the agent will handle.
▪ Scope Definition:
▪ Determine the scope of the agent's capabilities and functionalities.
▪ This includes specifying the tasks it will perform and the decisions it will make.
▪ Choose the Right Knowledge Representation
▪ Selecting Representation Techniques:
▪ The choice of knowledge representation (KR) technique is critical.
▪ Common KR techniques include semantic networks, frames, rules, and ontologies.
▪ Each has its strengths and is suited to different types of knowledge and reasoning processes.
▪ Representation Language:
▪ Choose a suitable knowledge representation language that can express the complexity of the domain
effectively.
▪ Languages such as OWL (Web Ontology Language), RDF (Resource Description Framework), and rule-based
languages are popular choices.
Designing a KBA
▪ Develop the Knowledge Base
▪ Gathering Knowledge:
▪ Collect comprehensive and accurate domain knowledge from subject matter experts, literature, and
existing databases.
▪ This knowledge forms the foundation of the agent's decision-making capabilities.
▪ Knowledge Organization:
▪ Organize the knowledge in a structured manner that facilitates efficient retrieval and reasoning.
▪ This includes categorizing knowledge, defining relationships, and establishing hierarchies.
Components of a Propositional Logic Agent
▪ Inference Mechanism:
▪ Allows the agent to derive new information from the knowledge base.
▪ Common inference methods include:
▪ Modus Ponens: If we know "P → Q" and "P," we can infer "Q."
▪ Resolution: A method used in automated theorem proving to infer conclusions from the
knowledge base.
▪ Inference mechanism helps the agent make logical conclusions and update its knowledge base
accordingly.
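A minimal forward-chaining sketch that applies Modus Ponens repeatedly; rules are written here as (set of premises, conclusion) pairs, which is an illustrative encoding rather than a standard library API.

# Forward chaining with Modus Ponens: keep applying rules until nothing new is derived.
def forward_chain(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)      # Modus Ponens: premises hold, so add the conclusion
                changed = True
    return facts

print(forward_chain([({"P"}, "Q")], {"P"}))    # {'P', 'Q'}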
Components of a Propositional Logic Agent
▪ Query Answering:
▪ When an agent needs to determine if a certain proposition (e.g., "The ground is
wet") is true, it uses the inference mechanism to check if this proposition can be
derived from the knowledge base.
▪ For example,
▪ Given the knowledge base with "R → W" and the fact "R" (It is raining), the
agent can infer "W" (The ground is wet).
Components of a Propositional Logic Agent
▪ Consider an agent with the following knowledge base:
▪ Propositions:
▪ D: Door is open
▪ M: Motion detected
▪ N: Night time
▪ A: Alarm should sound
▪ Rules:
▪ If the door is open and motion is detected at night, trigger the alarm D ∧ M ∧ N ⇒ A
▪ If motion is detected during the day, don’t trigger the alarm M ∧ ¬N ⇒ ¬A
▪ If door is open at night and there's no motion, don't alarm D ∧ ¬M ∧ N ⇒ ¬A
▪ Facts / Initial Knowledge:
▪ Door is open : D
▪ Motion is detected : M
▪ It is night : N
▪ Inference:
▪ A is true → Alarm must trigger.
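The alarm example can also be checked by evaluating the rules directly over the stated facts. This is a hand-written sketch; the rule conditions are mutually exclusive, so the order of the checks does not matter here.

# Security-alarm example: evaluate the rules directly from the facts D, M, N.
D, M, N = True, True, True        # door open, motion detected, night time

A = D and M and N                 # D ∧ M ∧ N ⇒ A
if M and not N:
    A = False                     # M ∧ ¬N ⇒ ¬A
if D and not M and N:
    A = False                     # D ∧ ¬M ∧ N ⇒ ¬A

print("Alarm should sound:", A)   # True -> the alarm must trigger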
Propositional Logic Agent
▪ Advantages and Limitations
▪ Advantages:
▪ Simplicity: Propositional logic is relatively straightforward, making it easy to
understand and implement.
▪ Decidability: Propositional logic is decidable, meaning there is an algorithm that
can determine whether any given proposition is true or false based on the
knowledge base.
▪ Limitations:
▪ Expressiveness: Propositional logic is limited in its ability to express more complex
statements about the world, especially when dealing with statements involving
variables or quantifiers (which require predicate logic).
▪ Scalability: As the number of propositions grows, the size of the knowledge base
and the complexity of inference can become unwieldy.
Propositional Logic Agent
▪ Applications
▪ Automated Theorem Proving: Propositional logic is used to prove theorems by checking if they can be
logically derived from a set of axioms.
▪ Planning: In planning problems, agents use propositional logic to generate a sequence of actions that
achieve a desired goal.
▪ Diagnosis: Agents use propositional logic to identify the cause of problems based on symptoms and
known relationships.
▪ Knowledge Representation: Used in expert systems to encode domain-specific knowledge, enabling
systems to make informed decisions or provide recommendations.
▪ Automated Reasoning: Employed in legal reasoning systems to evaluate legal arguments or in
configuration systems to ensure that all parts of a product configuration are compatible.
▪ Game Playing and Strategy: Used in simple board games or puzzle-solving applications where the
environment and rules are well-defined.
▪ Intelligent Tutoring Systems: Used in educational technology to provide personalized feedback and
guidance based on a student’s responses.
Propositional Logic Agent
Smart Home Assistant: Agent controls lights and heating based on user preferences and
sensor data.
❑ KB Example: Let
❑ M: Motion detected in room
❑ D: It is dark
❑ L: Light should be on
❑ H: Heating should be on
❑ C: It is cold
❑ Rules:
❑ (M ∧ D) ⇒ L
❑ C⇒H
❑ Facts:
❑ M is true (motion detected)
❑ D is true (dark)
❑ Inference: ???
Propositional Logic Agent
Smart Home Assistant: Agent controls lights and heating based on user preferences and
sensor data.
❑ KB Example: Let
❑ M: Motion detected in room
❑ D: It is dark
❑ L: Light should be on
❑ H: Heating should be on
❑ C: It is cold
❑ Rules:
❑ (M ∧ D) ⇒ L
❑ C⇒H
❑ Facts:
❑ M is true (motion detected)
❑ D is true (dark)
❑ Inference:
❑ L is true (light should be on)
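The same inference can be reproduced with a direct boolean evaluation. Since the fact list above does not assert C, it is assumed false in this sketch.

# Smart-home assistant: apply the two rules to the stated facts.
M, D = True, True                 # motion detected, it is dark
C = False                         # "it is cold" is not asserted, so assumed false here

L = M and D                       # (M ∧ D) ⇒ L
H = C                             # C ⇒ H

print("Light on:", L)             # True  -> light should be on
print("Heating on:", H)           # False -> heating stays off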
Propositional Logic Agent
Traffic Signal Controller: Control car lights based on rules.
❑ KB Example: Let
❑ R: Red light is on
❑ G: Green light is on
❑ S: Car should stop
❑ M: Car should move
❑ Rules:
❑ R⇒S
❑ G⇒M
❑ ¬(R ∧ G) (cannot be both red and green)
❑ Facts:
❑ R is true
❑ Inference: ???
Propositional Logic Agent
Traffic Signal Controller: Control car lights based on rules.
❑ KB Example: Let
❑ R: Red light is on
❑ G: Green light is on
❑ S: Car should stop
❑ M: Car should move
❑ Rules:
❑ R⇒S
❑ G ⇒ M
❑ ¬(R ∧ G) (cannot be both red and green)
❑ Facts:
❑ R is true
❑ Inference:
❑ S is true (car must stop)
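A corresponding sketch for the traffic-signal controller. Taking G as false is an assumption that is consistent with the constraint ¬(R ∧ G) once R is known to be true.

# Traffic-signal controller: facts plus rules, with the exclusion constraint checked.
R = True                          # red light is on (fact)
G = False                         # consistent with ¬(R ∧ G)

assert not (R and G)              # the lights cannot both be on

S = R                             # R ⇒ S  (car should stop)
M = G                             # G ⇒ M  (car should move)

print("Stop:", S, "Move:", M)     # Stop: True  Move: False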
Propositional Logic Agent
A robot is navigating a 2D grid (like a maze or terrain). It must determine which adjacent cells are safe
or contain obstacles, using its local sensor that can detect if an obstacle is nearby
(in any of the 4 directions: N, S, E, W).
Symbol Table
Symbol Meaning
Cij : Cell (i,j) is Clear (no obstacle)
Oij : Cell (i,j) contains an Obstacle
Sij : Sensor at cell (i,j) is triggered (i.e., an obstacle is nearby)
Aij : Agent is currently at cell (i,j)
¬ : Logical NOT
∧, ∨, ⇒ : Logical AND, OR, and IMPLIES
Propositional Logic Agent
Rules
Rule 1: Sensor implication (triggered)
If the sensor at (i,j) is triggered, at least one of the adjacent cells contains an obstacle:
Sij ⇒ (Oi-1,j ∨ Oi+1,j ∨ Oi,j-1 ∨ Oi,j+1)
Rule 2: Sensor implication (not triggered)
If the sensor at (i,j) is not triggered, none of the adjacent cells contains an obstacle:
¬Sij ⇒ (¬Oi-1,j ∧ ¬Oi+1,j ∧ ¬Oi,j-1 ∧ ¬Oi,j+1)
So:
At (2,2) → no obstacle near the agent
At (2,3) → an obstacle is nearby, but not necessarily in (2,3)
At (2,1) → no obstacle near that cell
Propositional Logic Agent
Facts
Sample KB facts:
Agent is at A22 : ¬S22, S23, ¬S21
Inferences
Apply Rule 2 to (2,2):
¬S22 ⇒ (¬O21 ∧ ¬O23 ∧ ¬O12 ∧ ¬O32)
From ¬S22, infer: ¬O21 ∧ ¬O23 ∧ ¬O12 ∧ ¬O32
Inference 1:
All cells adjacent to (2,2) are safe.
Propositional Logic Agent
Facts
Sample KB facts:
Agent is at A22 : ¬S22, S23, ¬S21
Inferences
From ¬S22, infer: ¬O21 ∧ ¬O23 ∧ ¬O12 ∧ ¬O32
Apply Rule 2 to (2,1):
¬S21 ⇒ (¬O11 ∧ ¬O22 ∧ ¬O31 ∧ ¬O20)
From ¬S21, infer: ¬O11 ∧ ¬O22 ∧ ¬O31 ∧ ¬O20
Inference 2:
All cells adjacent to (2,1) are safe.
Apply Rule 1 to (2,3):
S23 ⇒ (O22 ∨ O24 ∨ O13 ∨ O33)
But we already know that:
¬O22 (no obstacle at the current cell, from ¬S21)
¬O23 (the cell whose sensor is triggered is itself clear, from ¬S22)
Inference 3:
From S23, infer: O24 ∨ O13 ∨ O33, i.e. the obstacle lies in (2,4), (1,3), or (3,3).
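The same inference chain can be sketched in a few lines of Python. The grid coordinates and sensor facts follow the example above; the neighbor function and variable names are assumptions for illustration.

# Grid-sensor sketch: apply Rule 2 to the quiet sensors, then Rule 1 to the triggered one.
def neighbors(i, j):
    return [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]

sensor = {(2, 2): False, (2, 3): True, (2, 1): False}   # ¬S22, S23, ¬S21

clear = set()
for (i, j), triggered in sensor.items():
    if not triggered:                      # Rule 2: ¬Sij -> every neighbour is clear
        clear.update(neighbors(i, j))

# Rule 1 for (2,3): S23 -> an obstacle in some neighbour; drop neighbours already proved clear.
suspects = [cell for cell in neighbors(2, 3) if cell not in clear]

print("Known clear:", sorted(clear))
print("Possible obstacle cells near (2,3):", suspects)   # [(1, 3), (3, 3), (2, 4)]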
First-order Logic /
Predicate Logic /
First-order Predicate Logic
First Order Logic (FOL)
▪ Propositional Logic (PL) can only represent facts that are either true or false.
▪ PL is not sufficient to represent complex sentences or natural language statements.
▪ PL has very limited expressive power.
▪ Consider the following sentences, which we cannot represent using PL:
▪ "Some humans are intelligent", or
▪ "Sachin likes cricket."
▪ Objects: A, B, people, numbers, colors, theories, squares, …
▪ Relations: Can be unary relations such as: red, round, is adjacent, or n-ary relations such as: the sister of, brother of, has color, comes between, …
▪ Functions: Father of, best friend, third inning of, end of, …
▪ Constants: Constants are symbols that represent specific objects in the domain.
▪ Examples: If a, b, and c are constants, they might represent specific individuals like Alice, Bob, and Charlie.
▪ Variables: Variables are symbols that can represent any object in the domain.
▪ Examples: Variables such as x, y, and z can represent any object in the domain.
▪ Predicates: Predicates represent properties of objects or relationships between objects.
▪ Examples: P(x) could mean "x is a person", while Q(x, y) could mean "x is friends with y".
▪ Functions: Functions map objects to other objects.
▪ Examples: f(x) could represent a function that maps an object x to another object, like "the father of x".
First Order Logic : Components
• Logical Connectives:
• These include ∧ (and), ∨ (or), ¬ (not), → (implies), and ↔ (if and only if).
• Examples: P(x) ∧ Q(x, y) means “P(x) and Q(x, y) are both true”.
• Equality:
• States that two objects are the same.
• Examples: x = y asserts that x and y refer to the same object.
▪ Truth Assignment: Determines the truth value of each formula based on the
interpretation.
First Order Logic : Key Components
▪ Two main components:
▪ Syntax:
▪ Syntax represents the rules
to write expressions in First
Order Logic in Artificial
Intelligence.
▪ Semantics:
▪ Semantics refers to the
techniques that we use to
evaluate an expression of
First Order Logic in AI.
▪ These techniques use
various known relations and
facts of the respective
environment to deduce the
boolean value of the given
First Order Logic expression.
First Order Logic : Key Components
▪ Atomic Sentences:
▪ Atomic sentences are the most basic expressions of First Order Logic in AI.
▪ These sentences consist of a predicate followed by a set of terms inside parentheses.
▪ Formally, the structure of an atomic sentence looks like the following:
▪ Predicate 1 (term 1, term 2, term 3, ...)
▪ Predicate 2 (term 2, term 4, ...)
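One informal way to hold such atomic sentences in a program is as tuples of a predicate name followed by its terms; the predicate and term names below are illustrative, not part of the course material.

# Atomic sentences sketched as (predicate, term, term, ...) tuples.
kb = {
    ("likes", "Sachin", "cricket"),   # likes(Sachin, cricket)
    ("cat", "Tom"),                   # cat(Tom)
}

print(("cat", "Tom") in kb)           # True: the atomic sentence is in the KB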
First Order Logic : Key Components
▪ Complex Sentences:
▪ Complex sentences can be constructed by combining atomic sentences using connectives like
▪ AND (∧), OR (∨), NOT (¬), IMPLIES (⇒), IF AND ONLY IF (⇔) etc.
▪ Formally, if c1, c2, ... represent connectives, a complex sentence in First Order Logic
can be defined as follows:
▪ Predicate 1 (term 1, term 2, ...) c1 Predicate 2 (term 1, term 2, ...)
▪ Quantifiers enable us to determine the range and scope of a variable in a logical expression.
▪ Two types of quantifiers :
▪ Universal &
▪ Existential Quantifier
First Order Logic : Key Components
▪ Universal Quantifier:
▪ Is a symbol in a logical expression that signifies that the given expression is true in its range for all
instances of the concerned entity.
▪ Represented by the symbol ∀ (an inverted A).
▪ If x is a variable, then ∀x is read as "For all x" or "For every x" or "For each x".
▪ For example:
▪ Let us take the sentence, "All cats like fish".
▪ Let us take a variable x which can take the value of "cat".
▪ Let us take a predicate cat (x) which is true if x is a cat.
▪ Similarly, let us take another predicate likes (x, y) which is true if x likes y.
▪ Therefore, using the universal quantifier ∀, we can write
∀x cat(x) ⇒ likes(x, fish)
▪ This expression is read as "For all x, if x is a cat, then x likes fish".
First Order Logic : Key Components
▪ Existential Quantifier
▪ Is a symbol in a logical expression that signifies that the given expression is true in its range for at
least one of the instances of the concerned entity.
▪ It is represented by the symbol ∃ (a backwards E).
▪ If x is a variable, then ∃x is read as "There exists x" or "For some x" or "For at least one x".
▪ For example:
▪ Let us take the sentence, "Some students like ice cream".
▪ Let us take a predicate student (x) which is true if x is a student.
▪ Similarly, let us take another predicate likes (x, y), which is true if x likes y.
▪ Therefore, using the existential quantifier ∃, we can write
∃x student(x) ∧ likes(x, ice-cream)
▪ This expression reads, "There exists some x such that x is a student and also likes ice cream".
First Order Logic : Key Components
▪ Examples of FOL using quantifier:
▪ All birds fly.
In this sentence, the predicate is "fly(bird)".
Since all birds fly, it is represented as follows:
∀x bird(x) → fly(x)
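Over a small finite domain, the two quantified sentences above can be checked directly. This is a sketch; the domain and the bird/fly/student facts below are made up for illustration.

# Evaluating quantified sentences over a small finite domain.
domain  = ["sparrow", "penguin", "alice"]
bird    = {"sparrow", "penguin"}
flies   = {"sparrow"}
student = {"alice"}
likes_ice_cream = {"alice"}

# ∀x bird(x) → fly(x)
all_birds_fly = all((x not in bird) or (x in flies) for x in domain)

# ∃x student(x) ∧ likes(x, ice-cream)
some_student_likes = any((x in student) and (x in likes_ice_cream) for x in domain)

print(all_birds_fly)         # False: the penguin is a bird that does not fly in this domain
print(some_student_likes)    # True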