
Artificial Intelligence

UNIT - I
Introduction: AI problems, foundation of AI and history of AI intelligent agents: Agents
and Environments, the concept of rationality, the nature of environments, structure of
agents, problem solving agents, problem formulation

Foundations of AI

Artificial Intelligence draws upon a diverse set of disciplines, forming its intellectual
bedrock. These foundational areas provide the theoretical frameworks, tools, and
inspirations for building intelligent systems.

1.​ Philosophy: Ancient philosophers like Plato and Aristotle pondered the nature of
knowledge, reasoning, and the mind. Their work on logic, rationality, and the acquisition
of knowledge laid conceptual groundwork. Modern philosophy continues to contribute to
AI by exploring questions of consciousness, ethics, and the implications of AI on
humanity.
2.​ Mathematics and Logic:
○​ Logic: Formal logic, dating back to Aristotle, provides a precise language for
representing knowledge and performing rigorous reasoning. Boolean algebra
(George Boole), propositional logic, and predicate logic are fundamental to
rule-based systems, expert systems, and automated theorem proving.
○​ Probability Theory and Statistics: These are crucial for handling uncertainty,
making predictions, and enabling learning from data. Bayesian inference,
regression, and statistical modeling are at the heart of modern machine learning.
○​ Calculus and Linear Algebra: These mathematical tools are indispensable for
optimization algorithms used in neural networks and other learning models.
3.​ Computer Science: The advent of the programmable digital computer in the 1940s
provided the necessary hardware for AI to move from theory to practice.
○​ Algorithms and Data Structures: These are the building blocks of any AI
program, determining how information is processed and organized.
○​ Computational Theory (Alan Turing): Alan Turing's concept of a "universal
machine" and the "Turing Test" (1950) provided a theoretical model for
computation and a criterion for assessing machine intelligence, profoundly
influencing AI's early direction.

Dept of CSE, GITAMW
4.​ Neuroscience and Cognitive Science: Understanding how the human brain processes
information, learns, and makes decisions has directly inspired AI models.
○​ Artificial Neural Networks (ANNs): These are computational models inspired
by the structure and function of biological neurons, forming the basis of deep
learning.
○​ Cognitive Architectures: Attempts to build AI systems that mimic human
cognitive processes like perception, memory, and problem-solving.
5.​ Linguistics: The study of language is vital for Natural Language Processing (NLP), a
core area of AI that enables machines to understand, interpret, and generate human
language.
6.​ Economics and Control Theory: Concepts from economics, such as utility theory and
decision theory, influence how rational agents make choices. Control theory provides
frameworks for designing systems that can regulate and achieve desired states in dynamic
environments.

History of AI Intelligent Agents

The concept of an "intelligent agent" is central to modern AI. An intelligent agent is
anything that perceives its environment through sensors and acts upon that environment through
effectors, striving to achieve its goals. This agent paradigm has evolved alongside the broader
history of AI.

Early Seeds (Pre-1950s)

●​ Ancient Myths and Automata: The idea of artificial beings with intelligence dates back
to ancient Greek myths and legends. Early mechanical automatons (e.g., in ancient Egypt,
Greece, and later in the Renaissance) embodied the human desire to create machines that
could act autonomously.


●​ Philosophical and Logical Foundations: As mentioned above, the development of
formal logic and the concept of computation in the early 20th century (e.g., Turing's
work) laid the theoretical groundwork for thinking machines.

The Birth of AI (1950s - 1970s): Symbolic AI and Early Agents

●​ 1950: Alan Turing's "Computing Machinery and Intelligence": This seminal paper
introduced the Turing Test and explored the possibility of machine intelligence,
profoundly influencing early AI.
●​ 1956: The Dartmouth Workshop: Coined the term "Artificial Intelligence." This
workshop brought together pioneers like John McCarthy, Marvin Minsky, Nathaniel
Rochester, and Claude Shannon. The early focus was on symbolic AI, aiming to simulate
human intelligence through logical reasoning, problem-solving, and rule-based systems.
●​ Early Programs and "Agents": While the term "intelligent agent" wasn't explicitly
formalized as it is today, many early AI programs functioned as rudimentary
agents:
○​ Logic Theorist (1956, Newell, Simon, Shaw): Considered one of the first AI
programs, it could prove theorems in propositional logic, acting as an agent to
explore a problem space.
○​ General Problem Solver (GPS, 1957, Newell & Simon): Aimed to solve a wide
range of problems by identifying the difference between the current state and the
goal state and applying operators to reduce that difference. This was a classic
example of a goal-driven, deliberative agent.
○​ ELIZA (1966, Joseph Weizenbaum): A pioneering natural language processing
program that simulated a psychotherapist. While simple (using pattern matching),
it demonstrated how a system could interact with a human user and provide
seemingly intelligent responses, acting as a reactive agent.
○​ Shakey the Robot (1966-1972, SRI International): A groundbreaking mobile
robot that combined perception (vision), planning, and problem-solving. Shakey
could analyze its environment, devise and execute plans to achieve goals, and
reason about its actions. This was a sophisticated early example of a deliberative
agent interacting with a physical environment.


AI Winters and the Rise of Knowledge-Based Systems (1970s - 1980s)

●​ Expert Systems: During the 1970s and 80s, the focus shifted to "expert systems," which
were knowledge-based AI programs designed to mimic the decision-making ability of a
human expert in a specific domain (e.g., MYCIN for medical diagnosis, XCON for
configuring computer systems). These were essentially complex rule-based agents
operating within narrow, well-defined domains. They showed practical utility but also
exposed limitations in handling uncertainty and common sense.
●​ "AI Winters": Periods of reduced funding and interest in AI due to overly optimistic
predictions and limitations of the technology at the time.

Rebirth and Modern Agent Paradigms (1990s - Present)

●​ Intelligent Agent Paradigm Takes Center Stage: The 1990s saw a renewed focus on
the concept of intelligent agents, often in reaction to the limitations of purely symbolic
AI. Researchers started to classify agents based on their capabilities:
○​ Simple Reflex Agents: Act based on current percepts, ignoring past history (e.g., a
thermostat).
○​ Model-Based Reflex Agents: Maintain an internal state (a model of the world)
based on past percepts, allowing them to operate in partially observable
environments.
○​ Goal-Based Agents: Have explicit goals and plan sequences of actions to achieve
them.
○​ Utility-Based Agents: Maximize their "utility" (a measure of how desirable a state
is) when choosing actions, allowing for more nuanced decision-making in
complex situations with trade-offs.
○​ Learning Agents: Adapt and improve their performance over time by learning
from experience.
●​ Emergence of Machine Learning: The 1990s and 2000s saw a strong resurgence in
machine learning, particularly with statistical methods and the re-emergence of neural
networks (connectionism). This provided agents with the ability to learn from data rather
than being explicitly programmed with rules for every situation. This significantly
enhanced their adaptability and performance in complex, dynamic, or uncertain environments.
●​ Reinforcement Learning (RL): RL became a crucial paradigm for training agents to
make sequential decisions in an environment to maximize a reward signal. This has been
particularly successful in game playing (e.g., DeepMind's AlphaGo, which defeated a Go
world champion).
●​ Multi-Agent Systems (MAS): Research expanded to consider interactions between
multiple intelligent agents, leading to studies on cooperation, competition, and
coordination in complex environments.
●​ Deep Learning and Modern AI Agents (2010s - Present): The advent of deep learning,
fueled by massive datasets and computational power, has revolutionized AI. Modern AI
agents often incorporate deep learning for perception (e.g., computer vision for
self-driving cars), language understanding (e.g., large language models like GPT for
conversational agents), and complex decision-making in highly sophisticated
environments. These agents are more autonomous, adaptable, and capable than their
predecessors.

History of AI Agents:

The history of AI intelligent agents can be traced back to the early days of AI research in the
1950s, with the initial focus on symbolic reasoning and rule-based systems. These early systems,
while limited in their capabilities, laid the foundation for the development of more sophisticated
agents that could perceive, reason, and act in their environment. Over time, advancements in
machine learning, particularly deep learning, have enabled AI agents to handle complex tasks
and interact with humans in more natural and adaptive ways.

Here's a more detailed look at the evolution:

1. Early AI (1950s-1970s): Rule-Based Systems and Game Playing


●​ Symbolic Reasoning: Early AI focused on representing knowledge and reasoning using symbolic logic and rules.
●​ Game Playing: Programs like Samuel's checkers program demonstrated the ability of AI to learn and improve through self-play and reinforcement learning.
●​ ELIZA: This chatbot, developed in the mid-1960s, simulated a psychotherapist by using pattern matching and substitution rules.

2. The Rise of Machine Learning (1980s-2000s)

●​ Neural Networks: Research into neural networks, inspired by the structure of the human brain, gained prominence.
●​ Expert Systems: Rule-based systems were used to create expert systems, which aimed to mimic the decision-making capabilities of human experts in specific domains.
●​ Increased Computing Power: Advancements in computing power and the availability of large datasets enabled the development of more complex AI models.

3. Deep Learning and the Modern Era (2010s-Present)

●​ Deep Learning Revolution: Deep learning, a subset of machine learning, utilizes artificial neural networks with multiple layers to learn complex patterns and representations from data.
●​ Large Language Models (LLMs): LLMs, like those powering chatbots and virtual assistants, have demonstrated remarkable abilities in natural language understanding and generation.
●​ AI Agents in Diverse Applications: AI agents are now used in various fields, including healthcare, finance, customer service, and robotics.


●​ Agentic AI: The field is moving towards agentic AI, where AI systems can autonomously plan, execute, and adapt to achieve complex goals.

Agents in AI

An AI agent is a software program that can interact with its surroundings, gather information,
and use that information to complete tasks on its own to achieve goals set by humans.

●​ For instance, an AI agent on an online shopping platform can recommend products, answer customer questions, and process orders. If the agent needs more information, it can ask users for additional details.
●​ AI agents employ advanced natural language processing and machine learning
techniques to understand user input, interact step-by-step, and use external tools when
needed for accurate responses.
●​ Common AI Agent Applications are software development and IT automation, coding
tools, chat assistants, and online shopping platforms.

How do AI Agents Work?


AI agents follow a structured process to perceive, analyze, decide, and act within their
environment. Here’s an overview of how AI agents operate:

1. Collecting Information (Perceiving the Environment)

AI agents gather information from their surroundings through various means:

●​ Sensors: For example, a self-driving car uses cameras and radar to detect objects.
●​ User Input: Chatbots read text or listen to voice commands.
●​ Databases & Documents: Virtual assistants search records or knowledge bases for
relevant data.

2. Processing Information & Making Decisions


After gathering data, AI agents analyze it and decide what to do next. Some agents rely on
pre-set rules, while others utilize machine learning to predict the best course of action.
Advanced agents may also use retrieval-augmented generation (RAG) to access external
databases for more accurate responses.

3. Taking Action (Performing Tasks)

Once an agent makes a decision, it performs the required task, such as:

●​ Answering a customer query in a chatbot.
●​ Controlling a device, like a smart assistant turning off lights.
●​ Running automated tasks, such as processing orders on an online store.

4. Learning & Improving Over Time

Some AI agents can learn from past experiences to improve their responses. This self-learning
process, often referred to as reinforcement learning, allows agents to refine their behavior over
time. For example, a recommendation system on a streaming platform learns users' preferences
and suggests content accordingly.
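The four steps above can be sketched as a minimal perceive-decide-act loop; the class, rule names, and percepts below are purely illustrative, not a standard API:

```python
class SimpleAgentLoop:
    """Minimal perceive-decide-act cycle (illustrative sketch)."""

    def __init__(self, rules):
        self.rules = rules        # 2. decision knowledge: percept -> action
        self.history = []         # 4. record kept so the agent can improve later

    def perceive(self, environment):
        # 1. Collect information from the environment
        return environment.get("percept")

    def decide(self, percept):
        # 2. Process the percept and choose an action (rule-based here)
        return self.rules.get(percept, "do_nothing")

    def act(self, environment):
        # 3. Take action, logging the experience for step 4
        percept = self.perceive(environment)
        action = self.decide(percept)
        self.history.append((percept, action))
        return action

agent = SimpleAgentLoop({"order_placed": "process_order", "question": "answer"})
print(agent.act({"percept": "question"}))  # → answer
```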

Architecture of AI Agents
The architecture of AI agents serves as the blueprint for how they function.

There are four main components in an AI agent’s architecture:

●​ Profiling Module: This module helps the agent understand its role and purpose. It
gathers information from the environment to form perceptions.​
Example: A self-driving car uses sensors and cameras to detect obstacles.
●​ Memory Module: The memory module enables the agent to store and retrieve past
experiences. This helps the agent learn from prior actions and improve over time.​
Example: A chatbot remembers past conversations to give better responses.
●​ Planning Module: This module is responsible for decision-making. It evaluates
situations, weighs alternatives, and selects the most effective course of action.​
Example: A chess-playing AI plans its moves based on future possibilities.


●​ Action Module: The action module executes the decisions made by the planning
module in the real world. It translates decisions into real-world actions.​
Example: A robot vacuum moves to clean a designated area after detecting dirt.

AI Agent Classification
An agent is a system designed to perceive its environment, make decisions, and take actions to
achieve specific goals. Agents operate autonomously, without direct human control, and can be
classified based on their behavior, environment, and number of interacting agents.

●​ Reactive Agents respond to immediate stimuli in their environment, making decisions based on current conditions without planning ahead.
●​ Proactive Agents take initiative, planning actions to achieve long-term goals by
anticipating future conditions.
●​ Fixed Environments have stable rules and conditions, allowing agents to act based
on static knowledge.
●​ Dynamic Environments are constantly changing, requiring agents to adapt and
respond to new situations in real-time.
●​ Single-Agent Systems involve one agent working independently to solve a problem
or achieve a goal.
●​ Multi-Agent Systems involve multiple agents that collaborate, communicate, and
coordinate to achieve a shared objective.
●​ A Rational Agent chooses actions based on the goal of achieving the best possible outcome, considering both past and present information.

Key Components of an AI System

●​ An AI system includes the agent, which perceives the environment through sensors
and acts using actuators, and the environment, in which it operates.
●​ AI agents are essential in fields like robotics, gaming, and intelligent systems, where
they use various techniques such as machine learning to enhance decision-making
and adaptability.


Interaction of Agents with the Environment
Structure of an AI Agent

The structure of an AI agent is composed of two key components: Architecture and Agent
Program. Understanding these components is essential to grasp how intelligent agents function.

1. Architecture

Architecture refers to the underlying hardware or system on which the agent operates. It is the
"machinery" that enables the agent to perceive and act within its environment. Examples of
architecture include devices equipped with sensors and actuators, such as a robotic car,
camera, or a PC. These physical components enable the agent to gather sensory input and
execute actions in the world.

2. Agent Program


Agent Program is the software component that defines the agent's behavior. It implements the
agent function, which is a mapping from the agent's percept sequence (the history of all
perceptions it has gathered so far) to its actions. The agent function determines how the agent
will respond to different inputs it receives from its environment.

Agent = Architecture + Agent Program

The overall structure of an AI agent can be understood as a combination of both the architecture
and the agent program. The architecture provides the physical infrastructure, while the agent
program dictates the decision-making and actions of the agent based on its perceptual inputs.​
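One minimal way to realize an agent program is the table-driven sketch below, which records the percept sequence and looks it up in a table, exactly the mapping from percept sequences to actions described above. The percept names and table entries are invented for illustration:

```python
def make_table_driven_agent(table):
    """Return an agent program: a function from the latest percept to an action,
    keeping the full percept sequence internally (the agent function's input)."""
    percepts = []  # percept sequence: everything perceived so far

    def agent_program(percept):
        percepts.append(percept)
        # Look up the action for the entire percept sequence seen so far
        return table.get(tuple(percepts), "NoOp")

    return agent_program

# Illustrative table for a two-step interaction
table = {
    ("dirty",): "clean",
    ("dirty", "clean_floor"): "move",
}
agent = make_table_driven_agent(table)
print(agent("dirty"))        # → clean
print(agent("clean_floor"))  # → move
```

Table-driven agents are impractical for real environments (the table grows with every possible percept history), which is why the agent types described below trade the table for rules, models, goals, or utilities.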

Characteristics of an Agent
Types of Agents

1. Simple Reflex Agents


Simple reflex agents act solely on the current percept; the percept history (the record of
past perceptions) is ignored. The agent function is defined by condition-action
rules.

A condition-action rule maps a state (condition) to an action.

●​ If the condition is true, the associated action is performed.
●​ If the condition is false, no action is taken.

Simple reflex agents are effective in environments that are fully observable (where the current
percept gives all needed information about the environment). In partially observable
environments, simple reflex agents may encounter infinite loops because they do not consider
the history of previous percepts. Infinite loops might be avoided if the agent can randomize its
actions, introducing some variability in its behavior.
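Condition-action rules can be written directly in code. The sketch below uses the classic two-location vacuum world, a standard textbook example: the agent sees only its current location and whether it is dirty, and acts with no memory of past percepts:

```python
def simple_reflex_vacuum(percept):
    """Condition-action rules for the two-location vacuum world.
    Acts only on the current percept; percept history is ignored."""
    location, status = percept
    if status == "Dirty":    # condition → action
        return "Suck"
    elif location == "A":
        return "Right"
    else:                    # location == "B"
        return "Left"

print(simple_reflex_vacuum(("A", "Dirty")))  # → Suck
print(simple_reflex_vacuum(("A", "Clean")))  # → Right
```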

Simple Reflex Agents

2. Model-Based Reflex Agents


A model-based reflex agent finds a rule whose condition matches the current situation or percept.
It uses a model of the world to handle situations where the environment is only partially
observable.

●​ The agent tracks its internal state, which is adjusted based on each new percept.
●​ The internal state depends on the percept history (the history of what the agent has
perceived so far).

The agent stores the current state internally, maintaining a structure that represents the parts of
the world that cannot be directly seen or perceived. The process of updating the agent’s state
requires information about:

●​ How the world evolves independently of the agent.
●​ How the agent's actions affect the world.
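A minimal sketch of this idea, using a hypothetical door scenario: each percept is folded into an internal state, and rules are matched against that state rather than against the raw percept alone:

```python
class ModelBasedReflexAgent:
    """Keeps an internal state updated from each percept, so it can act
    even when the environment is only partially observable (illustrative sketch)."""

    def __init__(self, rules):
        self.rules = rules   # list of (condition, action) pairs
        self.state = {}      # internal model of the world

    def update_state(self, percept):
        # Fold the new percept into the model; a fuller agent would also apply
        # knowledge of how the world evolves and how its own actions changed it.
        self.state.update(percept)

    def choose_action(self, percept):
        self.update_state(percept)
        # Find a rule whose condition matches the current (modelled) state
        for condition, action in self.rules:
            if all(self.state.get(k) == v for k, v in condition.items()):
                return action
        return "NoOp"

agent = ModelBasedReflexAgent([({"door": "closed"}, "open_door"),
                               ({"door": "open"}, "walk_through")])
print(agent.choose_action({"door": "closed"}))  # → open_door
print(agent.choose_action({"door": "open"}))    # → walk_through
```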

Model-Based Reflex Agents

3. Goal-Based Agents


Goal-based agents make decisions based on how far they currently are from the goal; every action
aims to reduce that distance. They can choose from multiple possibilities,
selecting the one that best leads to the goal state.

●​ Knowledge that supports the agent's decisions is represented explicitly, meaning it's
clear and structured. It can also be modified, allowing for adaptability.
●​ The ability to modify the knowledge makes these agents more flexible in different
environments or situations.

Goal-based agents typically require search and planning to determine the best course of action.
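The distance-reduction idea can be sketched as a one-step greedy choice; real goal-based agents search over whole action sequences, and the 1-D positions below are purely illustrative:

```python
def goal_based_step(current, goal, actions):
    """Pick the action whose resulting state is closest to the goal
    (a one-step greedy sketch; full goal-based agents plan whole sequences)."""
    def distance(state):
        # Illustrative metric: absolute difference on a 1-D line
        return abs(goal - state)
    # actions: mapping from action name to the state it would produce
    return min(actions, key=lambda a: distance(actions[a]))

# Agent at position 2, goal at 5: moving right reduces the distance to the goal
print(goal_based_step(2, 5, {"left": 1, "right": 3, "stay": 2}))  # → right
```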

Goal-Based Agents

4. Utility-Based Agents

Utility-based agents are designed to make decisions that optimize their performance by
evaluating the preferences (or utilities) for each possible state. These agents assess multiple
alternatives and choose the one that maximizes their utility, which is a measure of how desirable
or "happy" a state is for the agent.

●​ Achieving the goal is not always sufficient; for example, the agent might prefer a
quicker, safer, or cheaper way to reach a destination.
●​ The utility function is essential for capturing this concept, mapping each state to a
real number that reflects the agent’s happiness or satisfaction with that state.

Since the world is often uncertain, utility-based agents choose actions that maximize expected
utility, ensuring they make the most favorable decision under uncertain conditions.
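The expected-utility calculation can be sketched as follows; the two routes and their (probability, utility) outcome pairs are invented for illustration:

```python
def expected_utility(outcomes):
    """Expected utility of an action: sum of probability x utility over its outcomes."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Two routes to a destination, each a list of (probability, utility) outcomes
actions = {
    "highway":   [(0.9, 10), (0.1, -50)],  # fast but risky: EU = 9 - 5 = 4
    "back_road": [(1.0, 6)],               # slower but certain: EU = 6
}
print(best_action(actions))  # → back_road
```

Note how the certain but modest route beats the risky fast one: a goal-based agent that only checks "does this reach the destination?" could not express that preference.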

Utility-Based Agents


5. Learning Agent

A learning agent in AI is an agent that can learn from its past experiences; it has
learning capabilities. It starts with basic knowledge and then adapts
automatically through learning. A learning agent has four main conceptual components:

1.​ Learning element: It is responsible for making improvements by learning from the
environment.
2.​ Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3.​ Performance element: It is responsible for selecting the external action.
4.​ Problem Generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
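The four components can be wired together in a small skeleton; the method names and the mopping example are illustrative, not a standard interface:

```python
class LearningAgent:
    """Wires together the four conceptual components (illustrative skeleton)."""

    def __init__(self):
        self.rules = {}  # knowledge used by the performance element

    def performance_element(self, percept):
        # Selects the external action from current knowledge
        return self.rules.get(percept, "explore")

    def critic(self, percept, action, reward):
        # Feedback against a fixed performance standard (here: the reward sign)
        return reward > 0

    def learning_element(self, percept, action, ok):
        # Improves the performance element based on the critic's feedback
        if ok:
            self.rules[percept] = action

    def problem_generator(self):
        # Suggests exploratory actions that lead to new, informative experiences
        return "try_something_new"

agent = LearningAgent()
agent.learning_element("wet_floor", "mop", agent.critic("wet_floor", "mop", +1))
print(agent.performance_element("wet_floor"))  # → mop
```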

Learning Agent

6. Multi-Agent Systems


A Multi-Agent System (MAS) consists of multiple interacting agents working together to achieve
a common goal. These agents can be autonomous or semi-autonomous, capable of perceiving
their environment, making decisions, and taking actions.

MAS can be classified into:

●​ Homogeneous MAS: Agents have the same capabilities, goals, and behaviors.
●​ Heterogeneous MAS: Agents have different capabilities, goals, and behaviors,
leading to more complex but flexible systems.
●​ Cooperative MAS: Agents work together to achieve a common goal.
●​ Competitive MAS: Agents work against each other for their own goals.

MAS can be implemented using game theory, machine learning, and agent-based modeling.

7. Hierarchical Agents

Hierarchical Agents are organized into a hierarchy, with high-level agents overseeing the
behavior of lower-level agents. The high-level agents provide goals and constraints, while the
low-level agents carry out specific tasks. They are useful in complex environments with many
tasks and sub-tasks.

This structure is beneficial in complex systems with many tasks and sub-tasks, such as robotics,
manufacturing, and transportation. Hierarchical agents allow for efficient decision-making and
resource allocation, improving system performance. In such systems, high-level agents set goals,
and low-level agents execute tasks to achieve those goals.

Uses of Agents
Agents are used in a wide range of applications in artificial intelligence, including:

●​ Robotics: Agents can be used to control robots and automate tasks in manufacturing,
transportation, and other industries.
●​ Smart homes and buildings: Agents can be used to control heating, lighting, and
other systems in smart homes and buildings, optimizing energy use and improving
comfort.


●​ Transportation systems: Agents can be used to manage traffic flow, optimize routes
for autonomous vehicles, and improve logistics and supply chain management.
●​ Healthcare: Agents can be used to monitor patients, provide personalized treatment
plans, and optimize healthcare resource allocation.
●​ Finance: Agents can be used for automated trading, fraud detection, and risk
management in the financial industry.
●​ Games: Agents can be used to create intelligent opponents in games and simulations,
providing a more challenging and realistic experience for players.

Overall, agents are a versatile and powerful tool in artificial intelligence that can help solve a
wide range of problems in different fields.


Rationality in Artificial Intelligence (AI)


Artificial Intelligence (AI) has rapidly advanced in recent years, transforming industries and
reshaping the way we live and work. One of the core aspects of AI is its ability to make decisions
and solve problems. This capability hinges on the concept of rationality. But what does
rationality mean in the context of AI, and how is it achieved?
A rational agent, often known as a rational being, is a person or entity that takes the best actions
possible given the circumstances and the information at hand. A rational agent can be any
decision-making entity, such as a person, corporation, machine, or software.

What is a Rational Agent in AI?


A rational agent has clear preferences, models uncertainty, and acts in a way that maximizes its
performance measure using all available actions. A rational agent is said to "do the right thing."
Much of AI is concerned with building rational agents, drawing on game theory and decision theory
in various real-world contexts.

Rational action is crucial for an AI agent since the AI reinforcement learning algorithm rewards
the best possible action with a positive reward and penalizes the worst possible action with a
negative reward. A rational AI agent is a system that performs actions to obtain the best possible
outcome or, in the case of uncertainty, the best-expected outcome.

Introduction to Rationality in AI
Rationality in AI refers to the ability of an artificial agent to make decisions that maximize its
performance based on the information it has and the goals it seeks to achieve. In essence, a
rational AI system aims to choose the best possible action from a set of alternatives to achieve a
specific objective. This involves logical reasoning, learning from experiences, and adapting to
new situations.

Types of Rationality
There are two primary types of rationality in AI: bounded rationality and perfect rationality.

1. Bounded Rationality

Bounded rationality recognizes that decision-making capabilities are limited by the information
available, cognitive limitations, and time constraints. AI systems operating under bounded


rationality use heuristics (mental shortcuts or rules of thumb) and approximations to make
decisions that are good enough, rather than optimal. This approach is practical in real-world
applications where perfect information and infinite computational resources are unavailable.

2. Perfect Rationality

Perfect rationality assumes that an AI system has access to complete information, unlimited
computational power, and infinite time to make decisions. While this is an idealized concept, it
serves as a benchmark for evaluating the performance of AI systems. Perfectly rational AI would
always make the best possible decision in any given situation.

How Does a Rational Agent Work?


●​ The rational-agent approach is mainly used for goal-oriented agents. Such an agent evaluates its environment by examining what it is like.
●​ It then examines each available action in its repertoire to see how it will alter the surroundings and help it achieve its goal.
●​ It tests all possible steps before selecting the best one, the one that would bring it closest to its goal.
●​ Rational agents are also used in self-driving cars, energy-saving air-conditioning
units, automated lights, and other devices that require environmental data to choose
the optimal course of action.

Criteria for Measuring Rationality


●​ Rationality is crucial for solving many problems on both a local and global scale.
This is typically founded on the belief that rationality is required to act efficiently and
achieve a variety of goals.
●​ This includes goals from several domains, such as ethics, humanism, science, and
religion.
●​ The study of rationality dates back to ancient Greece and has captivated many of the
finest minds.


●​ This interest is frequently fueled by a desire to understand our minds' potential and
limitations. Various theories even regard rationality as the essence of being human,
frequently to distinguish humans from other species.

Achieving Rationality in AI
Implementing rationality in AI involves several techniques and approaches:

1. Decision Theory

Decision theory provides a framework for making rational choices by evaluating the potential
outcomes of different actions. It combines probability theory and utility theory to calculate the
expected utility of each action and select the one with the highest expected utility. This approach
is widely used in AI for planning and problem-solving tasks.
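In symbols: the expected utility of an action $a$ sums the utility of each possible outcome state $s$, weighted by its probability, and the rational agent picks the action that maximizes it:

```latex
EU(a) = \sum_{s} P(s \mid a)\, U(s), \qquad a^{*} = \arg\max_{a} EU(a)
```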

2. Game Theory

Game theory studies strategic interactions between agents, where the outcome for each
participant depends on the actions of others. In AI, game theory is used to model and analyze
competitive and cooperative scenarios, enabling agents to make rational decisions in multi-agent
environments.

3. Machine Learning

Machine learning algorithms enable AI systems to learn from data and improve their
decision-making over time. By identifying patterns and relationships in data, machine learning
models can make more accurate predictions and choose actions that maximize performance.

4. Logic and Reasoning

Logical reasoning involves using formal rules and knowledge representations to infer new
information and make decisions. Techniques such as propositional logic, predicate logic, and
Bayesian networks help AI systems reason about the world and make rational decisions based on
the knowledge they possess.

5. Reinforcement Learning

Reinforcement learning is a type of machine learning where an agent learns to make decisions by
interacting with an environment. The agent receives feedback in the form of rewards or penalties
and uses this feedback to learn a policy that maximizes cumulative rewards. This approach is
particularly effective for sequential decision-making problems.
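A tabular Q-learning sketch on a tiny two-state corridor (state 2 is the goal, reached by moving right) illustrates the reward-driven update; the environment and parameters are invented for illustration:

```python
import random

def q_learning_demo(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a 2-state chain: moving 'right' from state 1
    reaches the goal (state 2) with reward +1 (illustrative, not a library API)."""
    q = {(s, a): 0.0 for s in (0, 1) for a in ("left", "right")}
    for _ in range(episodes):
        s = 0
        for _ in range(100):  # step cap keeps episodes short
            # Epsilon-greedy action selection: mostly exploit, sometimes explore
            if random.random() < epsilon:
                a = random.choice(("left", "right"))
            else:
                a = max(("left", "right"), key=lambda x: q[(s, x)])
            s2 = min(s + 1, 2) if a == "right" else max(s - 1, 0)
            r = 1.0 if s2 == 2 else 0.0  # reward only at the goal
            best_next = 0.0 if s2 == 2 else max(q[(s2, x)] for x in ("left", "right"))
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            if s2 == 2:
                break
            s = s2
    return q

random.seed(0)
q = q_learning_demo()
# After training, the learned values prefer 'right' in both states
print(q[(0, "right")] > q[(0, "left")], q[(1, "right")] > q[(1, "left")])
```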

Challenges in Achieving Rationality


While the goal of achieving rationality in AI is desirable, it comes with several challenges:

1. Uncertainty and Incomplete Information

In many real-world situations, AI systems must make decisions with incomplete or uncertain
information. Handling uncertainty effectively requires sophisticated probabilistic reasoning and
robust learning algorithms.

2. Computational Complexity

Optimal decision-making often involves searching through a vast space of possible actions and
outcomes. This can be computationally expensive, especially for complex problems. Balancing
the trade-off between computational resources and decision quality is a key challenge.

3. Ethical and Social Considerations

Rational decisions made by AI systems can have significant ethical and social implications.
Ensuring that AI systems align with human values and do not cause harm requires careful
consideration of ethical principles and societal impact.

Applications of Rational AI
Rational AI has numerous applications across various domains:

1. Autonomous Vehicles

Autonomous vehicles use rational decision-making to navigate safely, avoid obstacles, and
optimize routes. They must make real-time decisions based on sensor data and changing traffic
conditions.

2. Healthcare

In healthcare, AI systems assist in diagnosis, treatment planning, and personalized medicine.
Rational decision-making helps optimize patient outcomes by considering medical history,
current symptoms, and treatment options.

3. Finance

AI is used in finance for algorithmic trading, risk management, and fraud detection. Rational AI
systems analyze market data, predict trends, and make investment decisions to maximize returns.

4. Robotics

Robots in manufacturing, logistics, and service industries rely on rational decision-making to
perform tasks efficiently and adapt to dynamic environments.

Rationality in Decision-Making
●​ Research and brainstorm possible solutions: The probability of solving your
problem increases when you broaden your pool of prospective solutions. Research the
problem thoroughly, using both external sources and your own knowledge, to identify
as many feasible answers as you can. Brainstorming with others is another way to
generate more ideas.
●​ Set standards of success and failure for your potential solutions: Setting a
threshold for measuring the success or failure of your ideas lets you identify
which ones will genuinely address your problem. Your expectations should be realistic,
however; if your standards are practical, quantifiable, and targeted, you will be able
to identify a workable solution.


●​ Describe the potential outcomes of each solution: The next step is to determine the
repercussions (consequences) of each of your solutions. Create a table of each
alternative's strengths and shortcomings and compare them. You should also rank your
solutions from most to least likely to solve the problem.
●​ Choose the best solution and test it: Choose the best solution and test it after
evaluating all of your options. You can also begin to monitor your early results at this
stage.
●​ Implement the solution or try a new one: If your chosen solution passed your test
and solved your problem, it is the most rational choice you can make. Use it to fix
the current problem as well as similar ones that arise in the future. If it did not
solve your problem, try another of the solutions you came up with.

Nature of Environments:
In artificial intelligence (AI), the "nature of the environment" refers to the characteristics of the
environment in which an AI agent operates. These characteristics influence how the agent
perceives, interacts with, and learns from its surroundings. Key aspects include whether the
environment is fully or partially observable, deterministic or stochastic, static or dynamic, and
discrete or continuous.

Here's a breakdown of these key aspects:

1. Fully Observable vs. Partially Observable:

​ Fully Observable:​
The agent has complete access to all relevant information about the environment at any
given time. For example, in a game of chess, all pieces and their positions are visible.

Partially Observable:
The agent only has access to partial information about the environment, making it more
challenging to make decisions. Driving in fog, where visibility is limited, is an example.

2. Deterministic vs. Stochastic:


​ Deterministic:​
The outcome of an action is always predictable based on the agent's input. For example,
in a game of tic-tac-toe, each move has a predictable outcome.
​ Stochastic:​
The outcome of an action is uncertain and can be influenced by random factors. Playing
poker, where the cards dealt are random, is an example.

3. Static vs. Dynamic:

​ Static:​
The environment doesn't change over time unless the agent takes an action. A crossword
puzzle is an example.
​ Dynamic:​
The environment can change independently of the agent's actions, requiring the agent to
adapt to the evolving situation. Driving in traffic, where other vehicles move continuously, is an example.

4. Discrete vs. Continuous:

​ Discrete:​
The environment has a finite and countable number of states or actions. Turn-based
games like checkers are examples.
​ Continuous:​
The environment has an infinite number of states or actions. Controlling the throttle of a
car is an example.

5. Other important aspects:

​ Episodic vs. Sequential:​


In episodic environments (consisting of a series of separate, independent episodes),
each action is independent of past actions, while in sequential environments, past actions
influence future outcomes.
​ Single-agent vs. Multi-agent:​
Single-agent environments involve one AI agent, while multi-agent environments involve
multiple agents interacting with each other.


​ Competitive vs. Collaborative:​


In competitive environments, agents strive to outperform each other, while in
collaborative environments, agents work together to achieve a common goal.
​ Known vs. Unknown:​
In known environments, the AI has prior knowledge of the rules and possible actions. In
unknown environments, the AI must learn through trial and error.

The nature of the environment significantly impacts the design and development of AI agents,
influencing their perception, decision-making, and learning capabilities. Understanding these
properties is crucial for creating effective and efficient AI systems.

Problem Solving Agents:

A problem-solving agent is an Artificial Intelligence (AI) entity designed to find a sequence of
actions that leads from an initial state to a desired goal state. These agents are fundamental to
many AI applications, from game playing and robotics to logistics and medical diagnosis. This
document explores the core components, methodologies, and challenges associated with
problem-solving agents.

I. Core Components of a Problem-Solving Agent

Every problem-solving agent relies on several key components to function effectively:

●​ 1. Goal Formulation: The first step is to clearly define the goal the agent is trying to
achieve. This involves specifying the desired state of the world that the agent aims to
reach. For example, in a navigation problem, the goal might be "arrive at destination X."
●​ 2. Problem Formulation: Once the goal is set, the agent needs to formulate a problem
that can be solved. This involves:
○​ Initial State: The current state of the world from which the agent starts.
○​ Actions (Operators): A set of possible actions the agent can perform. Each
action has preconditions (what must be true for the action to be taken) and effects
(how the action changes the state of the world).


○​ Transition Model: A description of what state results from performing any action
in any state.
○​ Goal Test: A function that determines whether a given state is a goal state.
○​ Path Cost: A function that assigns a numerical cost to each path (sequence of
actions). This is often used to find the most efficient solution.
●​ 3. Search Algorithm: The heart of a problem-solving agent is its search algorithm. This
algorithm explores the state space (the set of all possible states reachable from the initial
state) to find a path from the initial state to a goal state. Different algorithms have varying
strengths and weaknesses in terms of completeness (guaranteed to find a solution if one
exists), optimality (guaranteed to find the best solution), time complexity, and space
complexity.
●​ 4. Solution Execution: Once a solution (a sequence of actions) is found by the search
algorithm, the agent needs to execute these actions in the environment. This may involve
interacting with the real world, which introduces challenges such as uncertainty and
partial observability.

II. Problem-Solving Methodologies (Search Strategies)

Problem-solving agents employ various search strategies, broadly categorized into uninformed
and informed search:

A. Uninformed Search Strategies (Blind Search)

These strategies do not use any domain-specific knowledge beyond the problem formulation.
They explore the state space systematically.

●​ 1. Breadth-First Search (BFS):


○​ How it works: Explores all nodes at the current depth level before moving to the
next level. It uses a FIFO (First-In, First-Out) queue.
○​ Completeness: Yes, if the branching factor is finite.
○​ Optimality: Yes, if the path cost is uniform (each step has the same cost).
○​ Disadvantage: Can be very memory-intensive as it stores all nodes at the current
depth.


●​ 2. Depth-First Search (DFS):


○​ How it works: Explores as far as possible along each branch before backtracking.
It uses a LIFO (Last-In, First-Out) stack.
○​ Completeness: No, can get stuck in infinite paths.
○​ Optimality: No.
○​ Advantage: Much lower memory requirements than BFS.
●​ 3. Depth-Limited Search (DLS):
○​ How it works: DFS with a predefined depth limit to avoid infinite paths.
○​ Completeness: Yes, if a solution exists within the depth limit.
○​ Optimality: No.
●​ 4. Iterative Deepening Depth-First Search (IDDFS):
○​ How it works: Repeatedly applies DLS with increasing depth limits (0, 1, 2, ...).
○​ Completeness: Yes.
○​ Optimality: Yes, if the path cost is uniform.
○​ Advantage: Combines the memory efficiency of DFS with the completeness and
optimality of BFS (for uniform cost).
●​ 5. Uniform-Cost Search (UCS):
○​ How it works: Expands the node with the lowest path cost from the initial state.
It uses a priority queue.
○​ Completeness: Yes.
○​ Optimality: Yes, for any non-negative step costs.
○​ Disadvantage: Can be slow if there are many actions with the same low cost.
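As a minimal sketch of how an uninformed strategy operates, the following implements breadth-first search with its FIFO frontier. The graph and node names are invented for illustration.

```python
from collections import deque

# A minimal sketch of breadth-first search on an invented graph, showing the
# FIFO frontier. Node names and edges are illustration only.

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
    "F": [],
}

def bfs(start, goal):
    """Return a path with the fewest edges from start to goal, or None."""
    frontier = deque([[start]])   # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft() # oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

print(bfs("A", "F"))   # → ['A', 'B', 'D', 'F']
```

Because the frontier is a FIFO queue, shallower paths are always expanded before deeper ones, which is exactly why BFS is optimal when every step has the same cost.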

B. Informed Search Strategies (Heuristic Search)

These strategies use problem-specific knowledge (heuristics) to guide the search, making them
more efficient than uninformed methods. A heuristic function h(n) estimates the cost of the
cheapest path from node n to a goal state.

●​ 1. Greedy Best-First Search:


○​ How it works: Expands the node that appears closest to the goal, as estimated by
h(n). It uses a priority queue.


○​ Completeness: No, can get stuck in local optima.


○​ Optimality: No.
○​ Advantage: Often finds solutions quickly.
●​ 2. A* Search:
○​ How it works: Combines UCS and Greedy Best-First Search. It expands the node
n with the lowest value of f(n)=g(n)+h(n), where g(n) is the cost from the initial
state to node n, and h(n) is the estimated cost from node n to the goal.
○​ Completeness: Yes, if h(n) is admissible (never overestimates the true cost to the
goal) and the branching factor is finite.
○​ Optimality: Yes, if h(n) is admissible and consistent (for every node n and every
successor n′ of n with step cost c(n,n′), h(n)≤c(n,n′)+h(n′)).
○​ Advantage: Widely used and highly effective due to its balance of completeness,
optimality, and efficiency.
●​ 3. Local Search Algorithms:
○​ How it works: Operate on a single current state and move to a neighboring state
if it improves the objective function (e.g., hill climbing, simulated annealing,
genetic algorithms). They are useful for optimization problems where the path to
the solution is not important, only the final state.
○​ Completeness/Optimality: Generally not guaranteed, but can find good solutions
for very large state spaces.
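The informed strategies above can be made concrete with a small sketch of A*. The 4x4 grid and wall positions below are an invented example; Manhattan distance serves as h(n), which is admissible (and consistent) for unit-cost 4-directional moves.

```python
import heapq

# Sketch of A* on a small 4x4 grid with a few walls, using Manhattan distance
# as the heuristic h(n). The grid layout is an invented example.

SIZE = 4
WALLS = {(1, 1), (1, 2), (2, 2)}

def h(cell, goal):
    # Manhattan distance: admissible for unit-cost 4-directional movement
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def a_star(start, goal):
    # frontier entries: (f = g + h, g, cell, path so far)
    frontier = [(h(start, goal), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (cell[0] + dx, cell[1] + dy)
            if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE) or nxt in WALLS:
                continue
            g2 = g + 1                    # uniform step cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt, goal), g2, nxt, path + [nxt]))
    return None

path = a_star((0, 0), (3, 3))
print(len(path) - 1)   # → 6 (number of moves on an optimal path)
```

The priority queue orders nodes by f(n) = g(n) + h(n), so the search balances cost already paid against estimated cost remaining, which is the combination of UCS and greedy best-first described above.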

III. Challenges in Problem Solving

Problem-solving agents face several significant challenges:

●​ 1. State Space Explosion: The number of possible states can grow exponentially with
the number of variables, making exhaustive search computationally infeasible for many
real-world problems.
●​ 2. Heuristic Quality: The performance of informed search heavily depends on the
quality of the heuristic function. Designing effective and admissible/consistent heuristics
can be challenging.


●​ 3. Uncertainty and Partial Observability: In real-world environments, the agent may
not have complete or accurate information about the current state, and actions may have
uncertain outcomes. This requires more sophisticated approaches like planning under
uncertainty.
●​ 4. Dynamic Environments: If the environment changes while the agent is planning or
executing actions, the pre-computed plan may become invalid, requiring replanning.
●​ 5. Multiple Agents: When multiple agents are involved, their actions can interfere with
each other, leading to complex coordination problems.

IV. Applications of Problem-Solving Agents

Problem-solving agents are applied in a wide range of domains:

●​ Game Playing: Chess, Go, Sudoku, puzzles.


●​ Robotics: Path planning, navigation, task execution.
●​ Logistics and Supply Chain Management: Route optimization, scheduling, resource
allocation.
●​ Medical Diagnosis: Finding sequences of tests and treatments.
●​ Automated Planning: Generating plans for complex tasks in various domains.
●​ Circuit Design: Finding optimal configurations for electronic circuits.

Problem Formulation in AI:

A problem-solving agent in AI needs to formulate a problem clearly before it can even begin to
search for a solution. Problem formulation is the process of precisely defining all the elements
necessary for an AI agent to understand what it needs to achieve and how it can do it. Without a
well-formulated problem, an agent wouldn't know its starting point, its destination, or the
permissible moves in between.

Here are the key components of problem formulation in AI:

1. Initial State


The initial state is the starting point of the agent. It's a complete and unambiguous description of
the world or the environment at the moment the problem-solving process begins.

●​ Characteristics:
○​ Precise: It must clearly define all relevant aspects of the environment.
○​ Unambiguous: There should be no room for misinterpretation.
○​ Representable: It must be possible to represent this state within the agent's
internal data structures (e.g., as a set of facts, a configuration of variables, a
specific arrangement of objects).
●​ Examples:
○​ Chess: The initial state is the standard chessboard configuration at the start of a
game.
○​ Pathfinding: The initial state is the agent's current location on a map.
○​ 8-Puzzle: The initial state is a specific arrangement of tiles on the 3x3 grid.
○​ Robot Navigation: The robot's current coordinates, orientation, and the state of
its sensors.

2. Actions (Operators)

The actions (also known as operators or moves) are the set of possible actions that the agent can
perform. Each action transforms the current state into a new state. For each action, we need to
define:

●​ Preconditions: What must be true in the current state for this action to be applicable. If
the preconditions are not met, the action cannot be performed.
●​ Effects (Transition Model): How the action changes the state of the world. This
describes the resulting state after the action is executed. The collection of all actions and
their effects defines the transition model of the environment, which specifies what state
results from performing any action in any state.
●​ Examples:
○​ Chess: Actions include "Move pawn from e2 to e4," "Move knight from b1 to
c3," etc. Preconditions might include "e2 must contain a pawn," "e4 must be
empty or contain an opponent's piece." Effects would be changing the positions of
pieces.
○​ Pathfinding: Actions could be "Move North," "Move South," "Move East,"
"Move West." Preconditions might be "There is no obstacle in the way." Effects
would be changing the agent's coordinates.
○​ 8-Puzzle: Actions include "Move blank tile up," "Move blank tile down," "Move
blank tile left," "Move blank tile right." Preconditions depend on the blank tile's
current position (e.g., cannot move up if the blank is in the top row). Effects
involve swapping the blank tile with an adjacent tile.
○​ Robot Navigation: Actions like "Move forward 1 meter," "Turn left 90 degrees."
Preconditions could be "No obstacle in the path." Effects would update the robot's
position and orientation.

3. Goal Test

The goal test is a function that determines whether a given state is a goal state. It takes a state as
input and returns true if the state satisfies the goal criteria, and false otherwise.

●​ Characteristics:
○​ Clear and Concise: It should be easy to verify if a state is a goal state.
○​ Binary Output: Returns either true or false.
●​ Examples:
○​ Chess: Is the opponent's King in checkmate?
○​ Pathfinding: Is the agent at the specified destination coordinates?
○​ 8-Puzzle: Are the tiles arranged in the desired order (e.g.,
1-2-3-4-5-6-7-8-blank)?
○​ Robot Navigation: Has the robot reached the target location and completed all
required sub-tasks?

4. Path Cost


The path cost (or step cost) is a function that assigns a numerical cost to each path (sequence of
actions). This is particularly important when there are multiple paths to a goal and the agent
needs to find the optimal or most efficient solution.

●​ Characteristics:
○​ Additive: The cost of a path is typically the sum of the costs of the individual
actions along that path.
○​ Non-negative: Action costs are usually non-negative.
●​ Examples:
○​ Pathfinding:
■​ Uniform Cost: Each step has a cost of 1 (useful for finding the shortest
path in terms of number of steps).
■​ Distance: The cost of moving between two points is the actual distance
(e.g., Euclidean distance, Manhattan distance).
■​ Time/Fuel: Cost could represent the time taken or fuel consumed for each
action.
○​ Robotics: Energy consumption, time taken, wear and tear on components.
○​ Logistics: Transportation costs, delivery time, number of vehicles used.

5. State Space

While not a direct component to be formulated by the user, the state space is an implicit but
crucial concept derived from the above components. It's the set of all reachable states from the
initial state by applying any sequence of actions. It can be visualized as a graph where:

●​ Nodes: Represent individual states.


●​ Edges: Represent actions that connect one state to another.

The goal of the search algorithm is to find a path from the initial state node to a goal state node
within this state space.
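The components above can be tied together in a small sketch. The class and all names here (GridProblem, the 2x2 grid, the action names) are invented for this example; the point is only that a formulated problem exposes an initial state, an actions/transition model with preconditions, a goal test, and a step cost.

```python
# An illustrative sketch (class and names invented) tying the components of
# problem formulation together for a toy 2x2-grid navigation problem.

class GridProblem:
    def __init__(self):
        self.initial_state = (0, 0)              # 1. initial state

    def actions(self, state):                    # 2. actions + transition model
        x, y = state
        moves = {"East": (x + 1, y), "North": (x, y + 1),
                 "West": (x - 1, y), "South": (x, y - 1)}
        # precondition: the resulting cell must stay on the 2x2 grid
        return {a: s for a, s in moves.items()
                if 0 <= s[0] <= 1 and 0 <= s[1] <= 1}

    def goal_test(self, state):                  # 3. goal test
        return state == (1, 1)

    def step_cost(self, state, action):          # 4. path cost, per step
        return 1

problem = GridProblem()
print(problem.actions(problem.initial_state))    # → {'East': (1, 0), 'North': (0, 1)}
print(problem.goal_test((1, 1)))                 # → True
```

A search algorithm needs nothing beyond this interface: it reads the initial state, expands states via actions, checks each with the goal test, and sums step costs to compare paths.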

Why is Problem Formulation Crucial?

●​ Clarity and Precision: It forces a clear understanding of the problem.


●​ Enables Search: Provides the necessary inputs for search algorithms to operate. Without
these elements, a search algorithm would have no basis for exploring possibilities.
●​ Defines Success: The goal test explicitly defines what constitutes a successful solution.
●​ Facilitates Optimization: Path cost allows for finding the best solution among multiple
possibilities.
●​ Foundation for Agent Design: A well-formulated problem is the first and most critical
step in designing an effective problem-solving AI agent.
