II-I AI Unit I
UNIT - I
Introduction: AI problems, foundation of AI and history of AI intelligent agents: Agents
and Environments, the concept of rationality, the nature of environments, structure of
agents, problem solving agents, problem formulation
Foundations of AI
Artificial Intelligence draws upon a diverse set of disciplines, forming its intellectual
bedrock. These foundational areas provide the theoretical frameworks, tools, and
inspirations for building intelligent systems.
1. Philosophy: Ancient philosophers like Plato and Aristotle pondered the nature of
knowledge, reasoning, and the mind. Their work on logic, rationality, and the acquisition
of knowledge laid conceptual groundwork. Modern philosophy continues to contribute to
AI by exploring questions of consciousness, ethics, and the implications of AI on
humanity.
2. Mathematics and Logic:
○ Logic: Formal logic, dating back to Aristotle, provides a precise language for
representing knowledge and performing rigorous reasoning. Boolean algebra
(George Boole), propositional logic, and predicate logic are fundamental to
rule-based systems, expert systems, and automated theorem proving.
○ Probability Theory and Statistics: These are crucial for handling uncertainty,
making predictions, and enabling learning from data. Bayesian inference,
regression, and statistical modeling are at the heart of modern machine learning.
○ Calculus and Linear Algebra: These mathematical tools are indispensable for
optimization algorithms used in neural networks and other learning models.
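As a small illustration of why calculus matters here, the sketch below minimizes a toy quadratic loss with gradient descent — the same basic update used to train neural networks. The loss function and learning rate are made up for illustration.

```python
# Minimal gradient-descent sketch: minimize the toy loss f(w) = (w - 3)^2.
# The derivative f'(w) = 2 * (w - 3) tells us which direction to step.

def gradient_descent(lr=0.1, steps=100):
    w = 0.0                      # arbitrary initial guess
    for _ in range(steps):
        grad = 2 * (w - 3)       # derivative of the loss at the current w
        w -= lr * grad           # step against the gradient
    return w
```

With these settings the estimate converges toward the minimizer at w = 3; real learning models do the same thing over millions of parameters, with linear algebra handling the bookkeeping.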
3. Computer Science: The advent of the programmable digital computer in the 1940s
provided the necessary hardware for AI to move from theory to practice.
○ Algorithms and Data Structures: These are the building blocks of any AI
program, determining how information is processed and organized.
○ Computational Theory (Alan Turing): Alan Turing's concept of a "universal
machine" and the "Turing Test" (1950) provided a theoretical model for computation
and an early benchmark for machine intelligence.
Dept of CSE
GITAMW 1
Artificial Intelligence
The concept of an "intelligent agent" is central to modern AI. An intelligent agent is
anything that perceives its environment through sensors and acts upon that environment through
effectors, striving to achieve its goals. This agent paradigm has evolved alongside the broader
history of AI.
● Ancient Myths and Automata: The idea of artificial beings with intelligence dates back
to ancient Greek myths and legends. Early mechanical automatons (e.g., in ancient Egypt,
Greece, and later in the Renaissance) embodied the human desire to create machines that
could act autonomously.
● 1950: Alan Turing's "Computing Machinery and Intelligence": This seminal paper
introduced the Turing Test and explored the possibility of machine intelligence,
profoundly influencing early AI.
● 1956: The Dartmouth Workshop: Coined the term "Artificial Intelligence." This
workshop brought together pioneers like John McCarthy, Marvin Minsky, Nathaniel
Rochester, and Claude Shannon. The early focus was on symbolic AI, aiming to simulate
human intelligence through logical reasoning, problem-solving, and rule-based systems.
● Early Programs and "Agents": While the term "intelligent agent" wasn't explicitly
formalized as it is today, many early AI programs functioned as rudimentary
agents:
○ Logic Theorist (1956, Newell, Simon, Shaw): Considered one of the first AI
programs, it could prove theorems in propositional logic, acting as an agent to
explore a problem space.
○ General Problem Solver (GPS, 1957, Newell & Simon): Aimed to solve a wide
range of problems by identifying the difference between the current state and the
goal state and applying operators to reduce that difference. This was a classic
example of a goal-driven, deliberative agent.
○ ELIZA (1966, Joseph Weizenbaum): A pioneering natural language processing
program that simulated a psychotherapist. While simple (using pattern matching),
it demonstrated how a system could interact with a human user and provide
seemingly intelligent responses, acting as a reactive agent.
○ Shakey the Robot (1966-1972, SRI International): A groundbreaking mobile
robot that combined perception (vision), planning, and problem-solving. Shakey
could analyze its environment, devise and execute plans to achieve goals, and
reason about its actions. This was a sophisticated early example of a deliberative
agent interacting with a physical environment.
● Expert Systems: During the 1970s and 80s, the focus shifted to "expert systems," which
were knowledge-based AI programs designed to mimic the decision-making ability of a
human expert in a specific domain (e.g., MYCIN for medical diagnosis, XCON for
configuring computer systems). These were essentially complex rule-based agents
operating within narrow, well-defined domains. They showed practical utility but also
exposed limitations in handling uncertainty and common sense.
● "AI Winters": Periods of reduced funding and interest in AI due to overly optimistic
predictions and limitations of the technology at the time.
● Intelligent Agent Paradigm Takes Center Stage: The 1990s saw a renewed focus on
the concept of intelligent agents, often in reaction to the limitations of purely symbolic
AI. Researchers started to classify agents based on their capabilities:
○ Simple Reflex Agents: Act based on current percepts, ignoring past history (e.g., a
thermostat).
○ Model-Based Reflex Agents: Maintain an internal state (a model of the world)
based on past percepts, allowing them to operate in partially observable
environments.
○ Goal-Based Agents: Have explicit goals and plan sequences of actions to achieve
them.
○ Utility-Based Agents: Maximize their "utility" (a measure of how desirable a state
is) when choosing actions, allowing for more nuanced decision-making in
complex situations with trade-offs.
○ Learning Agents: Adapt and improve their performance over time by learning
from experience.
● Emergence of Machine Learning: The 1990s and 2000s saw a strong resurgence in
machine learning, particularly with statistical methods and the re-emergence of neural
networks (connectionism). This provided agents with the ability to learn from data rather
than being explicitly programmed with rules for every situation, significantly expanding
what agents could do.
History of AI Agents:
The history of AI intelligent agents can be traced back to the early days of AI research in the
1950s, with the initial focus on symbolic reasoning and rule-based systems. These early systems,
while limited in their capabilities, laid the foundation for the development of more sophisticated
agents that could perceive, reason, and act in their environment. Over time, advancements in
machine learning, particularly deep learning, have enabled AI agents to handle complex tasks
and interact with humans in more natural and adaptive ways.
Symbolic Reasoning:
Early AI focused on representing knowledge and reasoning using symbolic logic and
rules.
Game Playing:
Programs like Samuel's checkers program demonstrated the ability of AI to learn and
improve through self-play and reinforcement learning.
ELIZA:
This chatbot, developed in the mid-1960s, simulated a psychotherapist by using pattern
matching and substitution rules.
Neural Networks:
Research into neural networks, inspired by the structure of the human brain, gained
prominence.
Expert Systems:
Rule-based systems were used to create expert systems, which aimed to mimic the
decision-making capabilities of human experts in specific domains.
Increased Computing Power:
Advancements in computing power and the availability of large datasets enabled the
development of more complex AI models.
Agentic AI:
The field is moving towards agentic AI, where AI systems can autonomously plan,
execute, and adapt to achieve complex goals.
Agents in AI
An AI agent is a software program that can interact with its surroundings, gather information,
and use that information to complete tasks on its own to achieve goals set by humans.
Agents gather information through several channels:
● Sensors: For example, a self-driving car uses cameras and radar to detect objects.
● User Input: Chatbots read text or listen to voice commands.
● Databases & Documents: Virtual assistants search records or knowledge bases for
relevant data.
After gathering data, AI agents analyze it and decide what to do next. Some agents rely on
pre-set rules, while others utilize machine learning to predict the best course of action.
Advanced agents may also use retrieval-augmented generation (RAG) to access external
databases for more accurate responses.
Once an agent makes a decision, it performs the required task.
Some AI agents can learn from past experiences to improve their responses. This self-learning
process, often referred to as reinforcement learning, allows agents to refine their behavior over
time. For example, a recommendation system on a streaming platform learns users' preferences
and suggests content accordingly.
Architecture of AI Agents
The architecture of AI agents serves as the blueprint for how they function.
● Profiling Module: This module helps the agent understand its role and purpose. It
gathers information from the environment to form perceptions.
Example: A self-driving car uses sensors and cameras to detect obstacles.
● Memory Module: The memory module enables the agent to store and retrieve past
experiences. This helps the agent learn from prior actions and improve over time.
Example: A chatbot remembers past conversations to give better responses.
● Planning Module: This module is responsible for decision-making. It evaluates
situations, weighs alternatives, and selects the most effective course of action.
Example: A chess-playing AI plans its moves based on future possibilities.
● Action Module: The action module executes the decisions made by the planning
module in the real world. It translates decisions into real-world actions.
Example: A robot vacuum moves to clean a designated area after detecting dirt.
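As a rough sketch of how these four modules fit together, the toy class below wires a perception step, a memory store, a simple planner, and an action executor into one loop. All names and the obstacle-avoidance behavior are hypothetical, chosen only to make the module boundaries visible.

```python
class SimpleAgent:
    """Toy agent wiring the four modules together (illustrative only)."""

    def __init__(self):
        self.memory = []                      # Memory module: past percepts

    def profile(self, environment):
        """Profiling module: form a percept from the environment."""
        return environment.get("obstacle", False)

    def plan(self, percept):
        """Planning module: store the percept, then choose an action."""
        self.memory.append(percept)
        return "turn" if percept else "forward"

    def act(self, action):
        """Action module: execute the chosen action in the world."""
        return f"executing {action}"

    def step(self, environment):
        return self.act(self.plan(self.profile(environment)))
```

Calling `step({"obstacle": True})` flows through all four modules and returns `"executing turn"`, while the memory list accumulates the percept history for later learning.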
AI Agent Classification
An agent is a system designed to perceive its environment, make decisions, and take actions to
achieve specific goals. Agents operate autonomously, without direct human control, and can be
classified based on their behavior, environment, and number of interacting agents.
● An AI system includes the agent, which perceives the environment through sensors
and acts using actuators, and the environment, in which it operates.
● AI agents are essential in fields like robotics, gaming, and intelligent systems, where
they use various techniques such as machine learning to enhance decision-making
and adaptability.
Interaction of Agents with the Environment
Structure of an AI Agent
The structure of an AI agent is composed of two key components: Architecture and Agent
Program. Understanding these components is essential to grasp how intelligent agents function.
1. Architecture
Architecture refers to the underlying hardware or system on which the agent operates. It is the
"machinery" that enables the agent to perceive and act within its environment. Examples of
architecture include devices equipped with sensors and actuators, such as a robotic car,
camera, or a PC. These physical components enable the agent to gather sensory input and
execute actions in the world.
2. Agent Program
Agent Program is the software component that defines the agent's behavior. It implements the
agent function, which is a mapping from the agent's percept sequence (the history of all
perceptions it has gathered so far) to its actions. The agent function determines how the agent
will respond to different inputs it receives from its environment.
The overall structure of an AI agent can be understood as a combination of both the architecture
and the agent program. The architecture provides the physical infrastructure, while the agent
program dictates the decision-making and actions of the agent based on its perceptual inputs.
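One classic way to realize the agent function is a table-driven agent program: a literal lookup table from percept sequences to actions. The sketch below is a minimal version; the vacuum-world percepts in the usage example are invented for illustration.

```python
def make_table_driven_agent(table):
    """Return an agent program: a mapping from percept sequences to actions.

    table maps tuples of percepts (the full history so far) to actions.
    """
    percepts = []                             # percept history, grows each call

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))     # look up the entire history
    return program
```

For example, with `table = {("dirty",): "suck", ("dirty", "clean"): "right"}`, the first call `agent("dirty")` returns `"suck"` and the second call `agent("clean")` returns `"right"`. The table grows explosively with the length of the percept sequence, which is exactly why practical agent programs compute the mapping rather than store it.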
Characteristics of an Agent
Types of Agents
1. Simple Reflex Agents
Simple reflex agents act solely on the basis of the current percept; the percept history
(the record of past perceptions) is ignored. The agent function is defined by
condition-action rules.
Simple reflex agents are effective in environments that are fully observable (where the current
percept gives all needed information about the environment). In partially observable
environments, simple reflex agents may encounter infinite loops because they do not consider
the history of previous percepts. Infinite loops might be avoided if the agent can randomize its
actions, introducing some variability in its behavior.
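A thermostat is the standard example of a simple reflex agent. The sketch below encodes its condition-action rules; the setpoint and the one-degree deadband are assumed values for illustration.

```python
# Simple reflex thermostat: condition-action rules on the current percept only.
def thermostat(percept, setpoint=20.0):
    """percept is the current temperature reading; history is ignored."""
    if percept < setpoint - 1:
        return "heat_on"       # too cold -> turn heating on
    if percept > setpoint + 1:
        return "heat_off"      # too warm -> turn heating off
    return "no_op"             # within the deadband -> do nothing
```

Note that the function takes only the current percept: there is no state, which is precisely what limits this agent to fully observable settings.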
2. Model-Based Reflex Agents
A model-based reflex agent finds a rule whose condition matches the current situation or
percept. It uses a model of the world to handle situations where the environment is only
partially observable.
● The agent tracks its internal state, which is adjusted based on each new percept.
● The internal state depends on the percept history (the history of what the agent has
perceived so far).
The agent stores the current state internally, maintaining a structure that represents the parts of
the world that cannot be directly seen or perceived. Updating the agent's state requires
knowledge of how the world evolves independently of the agent, and of how the agent's own
actions affect the world.
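A minimal sketch of this idea, using an invented two-square vacuum world: the agent keeps an internal set of squares it believes are clean, which is information the current percept alone does not provide.

```python
class ModelBasedVacuum:
    """Toy model-based agent: remembers which squares it has cleaned."""

    def __init__(self):
        self.cleaned = set()                  # internal model of the world

    def step(self, location, dirty):
        if dirty:
            self.cleaned.add(location)        # update the model from the percept
            return "suck"
        # Consult the model: head toward a square not yet known to be clean.
        return "right" if location == "A" and "A" in self.cleaned else "left"
```

The `cleaned` set is the internal state: it summarizes the percept history so the agent can act sensibly even though it only ever perceives its current square.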
3. Goal-Based Agents
Goal-based agents make decisions based on how far they currently are from the goal, and every
action aims to reduce that distance. They can choose from multiple possibilities, selecting the
one that best leads to the goal state.
● Knowledge that supports the agent's decisions is represented explicitly, meaning it's
clear and structured. It can also be modified, allowing for adaptability.
● The ability to modify the knowledge makes these agents more flexible in different
environments or situations.
Goal-based agents typically require search and planning to determine the best course of action.
Goal-Based Agents
4. Utility-Based Agents
Utility-based agents are designed to make decisions that optimize their performance by
evaluating the preferences (or utilities) for each possible state. These agents assess multiple
alternatives and choose the one that maximizes their utility, which is a measure of how desirable
or "happy" a state is for the agent.
● Achieving the goal is not always sufficient; for example, the agent might prefer a
quicker, safer, or cheaper way to reach a destination.
● The utility function is essential for capturing this concept, mapping each state to a
real number that reflects the agent’s happiness or satisfaction with that state.
Since the world is often uncertain, utility-based agents choose actions that maximize expected
utility, ensuring they make the most favorable decision under uncertain conditions.
Utility-Based Agents
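The "maximize expected utility" rule above can be written directly as code. In this sketch, each action comes with a hypothetical list of (probability, utility) outcomes; the numbers in the usage example are invented.

```python
# Choose the action with the highest expected utility under uncertainty.
def expected_utility(action, outcomes):
    """outcomes: list of (probability, utility) pairs for this action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(a, actions[a]))
```

For instance, a "fast" route worth 10 with probability 0.7 but -20 with probability 0.3 has expected utility 1, so a guaranteed "safe" route worth 5 is preferred — exactly the kind of trade-off a goal test alone cannot express.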
5. Learning Agent
A learning agent in AI is an agent that can learn from its past experiences. It starts out acting
on basic knowledge and then adapts its behavior automatically through learning. A learning
agent has four main conceptual components:
1. Learning element: It is responsible for making improvements by learning from the
environment.
2. Critic: Provides feedback to the learning element describing how well the agent is
doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external action.
4. Problem Generator: This component is responsible for suggesting actions that will
lead to new and informative experiences.
Learning Agent
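The four components can be sketched as methods of one class. This is a deliberately tiny caricature — the weight update rule and action names are assumptions for illustration, not a real learning algorithm.

```python
class LearningAgent:
    """Toy learning agent: the critic's feedback nudges future action choices."""

    def __init__(self):
        self.weights = {"left": 0.0, "right": 0.0}   # action preferences

    def performance_element(self):
        """Select the external action the agent currently prefers."""
        return max(self.weights, key=self.weights.get)

    def critic(self, reward):
        """Judge the outcome against a fixed performance standard (the reward)."""
        return reward

    def learning_element(self, action, feedback):
        """Improve the performance element using the critic's feedback."""
        self.weights[action] += 0.1 * feedback

    def problem_generator(self):
        """Suggest an exploratory action to gain new, informative experience."""
        return "left"
```

A single round of positive feedback for "right" is enough to shift the performance element's choice, showing how the learning element rewires behavior over time.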
6. Multi-Agent Systems
Multi-Agent Systems (MAS) consist of multiple interacting agents working together to achieve
a common goal. These agents can be autonomous or semi-autonomous, capable of perceiving
their environment, making decisions, and taking action.
● Homogeneous MAS: Agents have the same capabilities, goals, and behaviors.
● Heterogeneous MAS: Agents have different capabilities, goals, and behaviors,
leading to more complex but flexible systems.
● Cooperative MAS: Agents work together to achieve a common goal.
● Competitive MAS: Agents work against each other for their own goals.
MAS can be implemented using game theory, machine learning, and agent-based modeling.
7. Hierarchical Agents
Hierarchical Agents are organized into a hierarchy, with high-level agents overseeing the
behavior of lower-level agents. The high-level agents provide goals and constraints, while the
low-level agents carry out specific tasks. They are useful in complex environments with many
tasks and sub-tasks.
This structure is beneficial in complex systems with many tasks and sub-tasks, such as robotics,
manufacturing, and transportation. Hierarchical agents allow for efficient decision-making and
resource allocation, improving system performance. In such systems, high-level agents set goals,
and low-level agents execute tasks to achieve those goals.
Uses of Agents
Agents are used in a wide range of applications in artificial intelligence, including:
● Robotics: Agents can be used to control robots and automate tasks in manufacturing,
transportation, and other industries.
● Smart homes and buildings: Agents can be used to control heating, lighting, and
other systems in smart homes and buildings, optimizing energy use and improving
comfort.
● Transportation systems: Agents can be used to manage traffic flow, optimize routes
for autonomous vehicles, and improve logistics and supply chain management.
● Healthcare: Agents can be used to monitor patients, provide personalized treatment
plans, and optimize healthcare resource allocation.
● Finance: Agents can be used for automated trading, fraud detection, and risk
management in the financial industry.
● Games: Agents can be used to create intelligent opponents in games and simulations,
providing a more challenging and realistic experience for players.
Overall, agents are a versatile and powerful tool in artificial intelligence that can help solve a
wide range of problems in different fields.
Rational action is crucial for an AI agent: in reinforcement learning, for example, the best
possible action earns a positive reward and the worst possible action incurs a negative one. A
rational AI agent is a system that performs actions to obtain the best possible outcome or, under
uncertainty, the best expected outcome.
Introduction to Rationality in AI
Rationality in AI refers to the ability of an artificial agent to make decisions that maximize its
performance based on the information it has and the goals it seeks to achieve. In essence, a
rational AI system aims to choose the best possible action from a set of alternatives to achieve a
specific objective. This involves logical reasoning, learning from experiences, and adapting to
new situations.
Types of Rationality
There are two primary types of rationality in AI: bounded rationality and perfect rationality.
1. Bounded Rationality
Bounded rationality recognizes that decision-making capabilities are limited by the information
available, cognitive limitations, and time constraints. AI systems operating under bounded
rationality use heuristics (mental shortcuts or rules of thumb) and approximations to make
decisions that are good enough, rather than optimal. This approach is practical in real-world
applications where perfect information and infinite computational resources are unavailable.
2. Perfect Rationality
Perfect rationality assumes that an AI system has access to complete information, unlimited
computational power, and infinite time to make decisions. While this is an idealized concept, it
serves as a benchmark for evaluating the performance of AI systems. Perfectly rational AI would
always make the best possible decision in any given situation.
● Interest in rationality is frequently fueled by a desire to understand the potential and
limitations of our own minds. Various theories even regard rationality as the essence of
being human, often to distinguish humans from other species.
Achieving Rationality in AI
Implementing rationality in AI involves several techniques and approaches:
1. Decision Theory
Decision theory provides a framework for making rational choices by evaluating the potential
outcomes of different actions. It combines probability theory and utility theory to calculate the
expected utility of each action and select the one with the highest expected utility. This approach
is widely used in AI for planning and problem-solving tasks.
2. Game Theory
Game theory studies strategic interactions between agents, where the outcome for each
participant depends on the actions of others. In AI, game theory is used to model and analyze
competitive and cooperative scenarios, enabling agents to make rational decisions in multi-agent
environments.
3. Machine Learning
Machine learning algorithms enable AI systems to learn from data and improve their
decision-making over time. By identifying patterns and relationships in data, machine learning
models can make more accurate predictions and choose actions that maximize performance.
4. Logical Reasoning
Logical reasoning involves using formal rules and knowledge representations to infer new
information and make decisions. Techniques such as propositional logic, predicate logic, and
Bayesian networks help AI systems reason about the world and make rational decisions based on
the knowledge they possess.
5. Reinforcement Learning
Reinforcement learning is a type of machine learning where an agent learns to make decisions by
interacting with an environment. The agent receives feedback in the form of rewards or penalties
and uses this feedback to learn a policy that maximizes cumulative rewards. This approach is
particularly effective for sequential decision-making problems.
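A one-step Q-learning update is the standard concrete form of this idea. The sketch below implements the textbook update rule; the state and action names in the usage example are invented.

```python
# One-step Q-learning update:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """Update the Q-table in place and return the new Q(s, a)."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]
```

Starting from an empty table, a reward of 1.0 moves Q("s0", "go") halfway toward the target (with alpha = 0.5), and repeated interaction propagates value backward through the state space — the cumulative-reward maximization the text describes.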
Challenges in Achieving Rationality
1. Handling Uncertainty
In many real-world situations, AI systems must make decisions with incomplete or uncertain
information. Handling uncertainty effectively requires sophisticated probabilistic reasoning and
robust learning algorithms.
2. Computational Complexity
Optimal decision-making often involves searching through a vast space of possible actions and
outcomes. This can be computationally expensive, especially for complex problems. Balancing
the trade-off between computational resources and decision quality is a key challenge.
3. Ethical and Social Implications
Rational decisions made by AI systems can have significant ethical and social implications.
Ensuring that AI systems align with human values and do not cause harm requires careful
consideration of ethical principles and societal impact.
Applications of Rational AI
Rational AI has numerous applications across various domains:
1. Autonomous Vehicles
Autonomous vehicles use rational decision-making to navigate safely, avoid obstacles, and
optimize routes. They must make real-time decisions based on sensor data and changing traffic
conditions.
2. Healthcare
In healthcare, rational agents can monitor patients, support personalized treatment plans, and
optimize the allocation of healthcare resources.
3. Finance
AI is used in finance for algorithmic trading, risk management, and fraud detection. Rational AI
systems analyze market data, predict trends, and make investment decisions to maximize returns.
4. Robotics
Robots use rational decision-making to control their movements and automate tasks in
manufacturing, transportation, and other industries.
Rationality in Decision-Making
Rational decision-making typically proceeds through a sequence of steps:
● Research and brainstorm possible solutions: The probability of addressing your
problem increases when you broaden your pool of prospective solutions. To identify
as many feasible answers as you can, you should research your problem thoroughly
using both the internet and your knowledge. To come up with more ideas,
brainstorming with others is another option.
● Set standards of success and failure for your potential solutions: Setting a
threshold for measuring the success and failure of your ideas allows you to identify
which ones will genuinely address your problem. Your expectations of success should
not be unrealistic, or you will be unable to find a solution; if your standards are
practical, quantitative, and targeted, you will be able to identify one.
● Describe the potential outcomes of each solution: The next step is to determine the
repercussions(consequences) of each of your solutions. Create a table of each
alternative's strengths and shortcomings and compare them. You should also prioritize
your solutions in a list, from greatest to worst possibility of solving the problem.
● Choose the best solution and test it: Choose the best solution and test it after
evaluating all of your options. You can also begin to monitor your early results at this
stage.
● Implement the solution or try a new one: If your suggested answer passed your test
and solved your problem, it is the most logical choice you can make. You should use
it to fix your current problem as well as any future ones that arise. If the answer did
not fix your problem, try another possible solution that you came up with.
Nature of Environments:
In artificial intelligence (AI), the "nature of the environment" refers to the characteristics of the
environment in which an AI agent operates. These characteristics influence how the agent
perceives, interacts with, and learns from its surroundings. Key aspects include whether the
environment is fully or partially observable, deterministic or stochastic, static or dynamic, and
discrete or continuous.
Fully Observable:
The agent has complete access to all relevant information about the environment at any
given time. For example, in a game of chess, all pieces and their positions are visible.
Partially Observable:
The agent only has access to partial information about the environment, making it more
challenging to make decisions. Driving in fog, where visibility is limited, is an example.
Deterministic:
The outcome of an action is always predictable based on the agent's input. For example,
in a game of tic-tac-toe, each move has a predictable outcome.
Stochastic:
The outcome of an action is uncertain and can be influenced by random factors. Playing
poker, where the cards dealt are random, is an example.
Static:
The environment doesn't change over time unless the agent takes an action. A crossword
puzzle is an example.
Dynamic:
The environment can change independently of the agent's actions, requiring the agent to
adapt to the evolving situation.
Discrete:
The environment has a finite and countable number of states or actions. Turn-based
games like checkers are examples.
Continuous:
The environment has an infinite number of states or actions. Controlling the throttle of a
car is an example.
The nature of the environment significantly impacts the design and development of AI agents,
influencing their perception, decision-making, and learning capabilities. Understanding these
properties is crucial for creating effective and efficient AI systems.
Problem-Solving Agents
A problem-solving agent decides what to do by finding sequences of actions that lead to
desirable states. Its operation involves the following steps:
● 1. Goal Formulation: The first step is to clearly define the goal the agent is trying to
achieve. This involves specifying the desired state of the world that the agent aims to
reach. For example, in a navigation problem, the goal might be "arrive at destination X."
● 2. Problem Formulation: Once the goal is set, the agent needs to formulate a problem
that can be solved. This involves:
○ Initial State: The current state of the world from which the agent starts.
○ Actions (Operators): A set of possible actions the agent can perform. Each
action has preconditions (what must be true for the action to be taken) and effects
(how the action changes the state of the world).
○ Transition Model: A description of what state results from performing any action
in any state.
○ Goal Test: A function that determines whether a given state is a goal state.
○ Path Cost: A function that assigns a numerical cost to each path (sequence of
actions). This is often used to find the most efficient solution.
● 3. Search Algorithm: The heart of a problem-solving agent is its search algorithm. This
algorithm explores the state space (the set of all possible states reachable from the initial
state) to find a path from the initial state to a goal state. Different algorithms have varying
strengths and weaknesses in terms of completeness (guaranteed to find a solution if one
exists), optimality (guaranteed to find the best solution), time complexity, and space
complexity.
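As one concrete search algorithm, the sketch below implements breadth-first search, which is complete and, when all steps cost the same, optimal in the number of steps. The successor function in the usage example (moving along a number line) is an invented toy problem.

```python
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """Return a list of actions from initial to a goal state, or None.

    successors(state) yields (action, next_state) pairs.
    """
    frontier = deque([(initial, [])])         # FIFO queue of (state, path)
    explored = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for action, next_state in successors(state):
            if next_state not in explored:    # avoid revisiting states
                explored.add(next_state)
                frontier.append((next_state, path + [action]))
    return None
```

On the number-line toy problem (start at 0, goal 3, moves of +1/-1), it returns the three-step path — the shortest one, as BFS guarantees for unit step costs.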
● 4. Solution Execution: Once a solution (a sequence of actions) is found by the search
algorithm, the agent needs to execute these actions in the environment. This may involve
interacting with the real world, which introduces challenges such as uncertainty and
partial observability.
Problem-solving agents employ various search strategies, broadly categorized into uninformed
and informed search:
Uninformed (Blind) Search: These strategies do not use any domain-specific knowledge beyond
the problem formulation. They explore the state space systematically.
Informed (Heuristic) Search: These strategies use problem-specific knowledge (heuristics) to
guide the search, making them more efficient than uninformed methods. A heuristic function
h(n) estimates the cost of the cheapest path from node n to a goal state.
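A* search is the classic informed algorithm built on h(n): it expands nodes in order of f(n) = g(n) + h(n), the cost so far plus the heuristic estimate. The sketch below is a minimal version; the grid world and Manhattan-distance heuristic in the usage example are invented for illustration.

```python
import heapq

def a_star(initial, goal, successors, h):
    """A* search: expand nodes by f(n) = g(n) + h(n); return the action path."""
    frontier = [(h(initial), 0, initial, [])]   # (f, g, state, path)
    best_g = {initial: 0}                        # cheapest known cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for action, nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):   # found a cheaper route
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [action]))
    return None
```

With an admissible heuristic (one that never overestimates, like Manhattan distance on a grid), A* is guaranteed to return an optimal path.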
Challenges in Problem Solving
● 1. State Space Explosion: The number of possible states can grow exponentially with
the number of variables, making exhaustive search computationally infeasible for many
real-world problems.
● 2. Heuristic Quality: The performance of informed search heavily depends on the
quality of the heuristic function. Designing effective and admissible/consistent heuristics
can be challenging.
A problem-solving agent in AI needs to formulate a problem clearly before it can even begin to
search for a solution. Problem formulation is the process of precisely defining all the elements
necessary for an AI agent to understand what it needs to achieve and how it can do it. Without a
well-formulated problem, an agent wouldn't know its starting point, its destination, or the
permissible moves in between.
1. Initial State
The initial state is the starting point of the agent. It's a complete and unambiguous description of
the world or the environment at the moment the problem-solving process begins.
● Characteristics:
○ Precise: It must clearly define all relevant aspects of the environment.
○ Unambiguous: There should be no room for misinterpretation.
○ Representable: It must be possible to represent this state within the agent's
internal data structures (e.g., as a set of facts, a configuration of variables, a
specific arrangement of objects).
● Examples:
○ Chess: The initial state is the standard chessboard configuration at the start of a
game.
○ Pathfinding: The initial state is the agent's current location on a map.
○ 8-Puzzle: The initial state is a specific arrangement of tiles on the 3x3 grid.
○ Robot Navigation: The robot's current coordinates, orientation, and the state of
its sensors.
2. Actions (Operators)
The actions (also known as operators or moves) are the set of possible actions that the agent can
perform. Each action transforms the current state into a new state. For each action, we need to
define:
● Preconditions: What must be true in the current state for this action to be applicable. If
the preconditions are not met, the action cannot be performed.
● Effects (Transition Model): How the action changes the state of the world. This
describes the resulting state after the action is executed. The collection of all actions and
their effects defines the transition model of the environment, which specifies what state
results from performing any action in any state.
● Examples:
○ Chess: Actions include "Move pawn from e2 to e4," "Move knight from b1 to
c3," etc. Preconditions might include "e2 must contain a pawn" and "the
intervening squares must be empty."
3. Goal Test
The goal test is a function that determines whether a given state is a goal state. It takes a state as
input and returns true if the state satisfies the goal criteria, and false otherwise.
● Characteristics:
○ Clear and Concise: It should be easy to verify if a state is a goal state.
○ Binary Output: Returns either true or false.
● Examples:
○ Chess: Is the opponent's King in checkmate?
○ Pathfinding: Is the agent at the specified destination coordinates?
○ 8-Puzzle: Are the tiles arranged in the desired order (e.g.,
1-2-3-4-5-6-7-8-blank)?
○ Robot Navigation: Has the robot reached the target location and completed all
required sub-tasks?
4. Path Cost
The path cost (or step cost) is a function that assigns a numerical cost to each path (sequence of
actions). This is particularly important when there are multiple paths to a goal and the agent
needs to find the optimal or most efficient solution.
● Characteristics:
○ Additive: The cost of a path is typically the sum of the costs of the individual
actions along that path.
○ Non-negative: Action costs are usually non-negative.
● Examples:
○ Pathfinding:
■ Uniform Cost: Each step has a cost of 1 (useful for finding the shortest
path in terms of number of steps).
■ Distance: The cost of moving between two points is the actual distance
(e.g., Euclidean distance, Manhattan distance).
■ Time/Fuel: Cost could represent the time taken or fuel consumed for each
action.
○ Robotics: Energy consumption, time taken, wear and tear on components.
○ Logistics: Transportation costs, delivery time, number of vehicles used.
5. State Space
While not a direct component to be formulated by the user, the state space is an implicit but
crucial concept derived from the above components. It is the set of all states reachable from the
initial state by applying any sequence of actions. It can be visualized as a graph where nodes are
states and edges are the actions that transform one state into another.
The goal of the search algorithm is to find a path from the initial state node to a goal state node
within this state space.
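The five elements above can be collected into one problem object. The sketch below formulates a tiny, hypothetical grid problem — the agent may only move right or up — purely to show where the initial state, actions, transition model, goal test, and step cost each live.

```python
# Hypothetical formulation of a tiny grid problem as the standard elements.
class GridProblem:
    def __init__(self, initial, goal):
        self.initial_state = initial          # 1. initial state
        self.goal = goal

    def actions(self, state):                 # 2. actions applicable in a state
        return ["right", "up"]

    def result(self, state, action):          # transition model
        x, y = state
        return (x + 1, y) if action == "right" else (x, y + 1)

    def goal_test(self, state):               # 3. goal test
        return state == self.goal

    def step_cost(self, state, action):       # 4. path cost, per step
        return 1
```

A search algorithm needs nothing beyond this interface: it starts at `initial_state`, expands states via `actions` and `result`, stops when `goal_test` succeeds, and sums `step_cost` along the way — which is exactly the state-space graph described above.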
Why Problem Formulation Matters
● Enables Search: Provides the necessary inputs for search algorithms to operate. Without
these elements, a search algorithm would have no basis for exploring possibilities.
● Defines Success: The goal test explicitly defines what constitutes a successful solution.
● Facilitates Optimization: Path cost allows for finding the best solution among multiple
possibilities.
● Foundation for Agent Design: A well-formulated problem is the first and most critical
step in designing an effective problem-solving AI agent.