AI Unit 4 Notes

The document outlines various types of intelligent agents, including logic-based, reactive, belief-desire-intention, and layered architecture agents, along with their components such as perception, environment, actuators, reasoning, learning, and performance measures. It also discusses agent communication types, including informing, requesting, commanding, negotiating, and cooperative communication, as well as languages used in agent communication like KQML and FIPA-ACL. Additionally, it highlights the importance of bargaining, augmentation, trust, and reputation in multi-agent systems for conflict resolution and collaboration.


Unit 4

# Intelligent Agent
(See Unit 1 for the definition, types, and characteristics of intelligent agents.)
 Categories of Agents on the Basis of Architecture
1. Logic-Based Agents:
 Logic-based agents make decisions by reasoning and drawing
logical conclusions.
 They use formal logic to represent knowledge about the world
and logical deduction to determine the best course of action.
2. Reactive Agents:
Reactive agents, unlike logic-based agents, make decisions
based on a set of predefined responses or procedures.
They don't rely on reasoning or planning; instead, they directly
map specific situations to actions using simple rules or behaviors.
These agents respond to the current state of the environment
without storing past experiences or maintaining a model of the
world.
3. Belief-desire-intention agents:
 These agents carry out decision making through the
manipulation of data structures that represent the agent's
beliefs, desires, and intentions.
4. Layered architectures:
Here, decision making is spread across several software layers,
each of which explicitly reasons about the environment at a
different level of abstraction, as required by the problem
under consideration.
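The contrast between these architectures is easiest to see with a reactive agent, the simplest of the four. Below is a minimal Python sketch, a hypothetical vacuum-world example (the names are invented for illustration): the agent maps each percept directly to an action through condition-action rules, with no stored state and no reasoning.

```python
# Hypothetical condition-action rules for a two-square vacuum world.
def reactive_vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. ("A", "Dirty")."""
    location, status = percept
    if status == "Dirty":   # rule 1: clean dirt wherever it is found
        return "Suck"
    if location == "A":     # rule 2: square A is clean, so move right
        return "MoveRight"
    return "MoveLeft"       # rule 3: square B is clean, so move left
```

A logic-based or BDI agent, by contrast, would derive the action from a knowledge base or from its current beliefs, desires, and intentions rather than from a fixed rule table.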
# Structure of a Logic-Based Agent (the diagram from Unit 1 applies here)
1. Perception (Sensors)
 What is it?
 The perception component is responsible for allowing the agent
to sense its environment and gather data about the world around
it.
 It collects inputs from various sensors that detect different aspects
of the environment. Perception helps the agent understand the
current state of its surroundings, which is essential for making
decisions.
 How does it work?
 Sensors can be physical (like cameras, microphones, or temperature
sensors) or digital (like data inputs from software systems).
 The information they gather is used by the agent to form a mental
model or understanding of the environment.
 Example:
 Imagine a robot equipped with cameras and microphones. The
robot uses these sensors to "see" its surroundings and "hear"
sounds, which allows it to navigate through a room or interact
with people.

2. Environment
 What is it?
 The environment is the space or world where the agent operates.
This could be a real-world environment or a virtual one.
 The environment provides input to the agent through its sensors
and also receives output from the agent through its actuators.
 The environment can change over time, requiring the agent to
constantly adapt its actions to achieve its goals.
 How does it work?
 The environment can be dynamic, meaning that it changes based
on the agent's actions or external factors.
 The agent must continuously monitor its environment to adapt to
these changes and make informed decisions.
 In a virtual environment, the changes may be programmed or
influenced by other virtual agents.
 Example:
 For a self-driving car, the environment includes roads, traffic
lights, pedestrians, and other cars.

3. Actuators
 What is it?
 Actuators are the components that allow the agent to take actions
based on the decisions made by the reasoning or processing unit.
 Once the agent has analyzed its environment and determined what
action to take, the actuators help the agent carry out those actions
in the real world or within a virtual environment.
 How does it work?
 Actuators can take many forms, depending on the type of agent
and the tasks it needs to perform.
 They convert the agent’s internal decisions (processed by the
reasoning unit) into physical or virtual actions that affect the
environment.
 Example:
 A robot might have motors as actuators, allowing it to move
forward, turn, or lift objects; a drone might have rotors that
allow it to fly.

4. Reasoning (Processing or Decision-Making)


 What is it?
 The reasoning component is responsible for analyzing the
information gathered by the agent's sensors and deciding on the
best course of action.
 It processes the incoming data, evaluates different possibilities, and
uses algorithms or decision-making strategies to select the most
appropriate action.
 How does it work?
 This component often uses a set of predefined rules or models
(like decision trees, neural networks, or logical reasoning) to
process the input and make decisions.
 The reasoning component might need to make judgments about
the future consequences of its actions, weigh trade-offs, or plan
sequences of actions to achieve the agent's goals.
 Example:
 A chess-playing AI uses reasoning to decide its next move by
evaluating the current board and considering all possible moves,
their consequences, and strategies.

5. Learning (Optional)
 What is it?
 Some intelligent agents have the ability to learn from their
experiences.
 This means that over time, they can improve their performance
by analyzing the results of their actions and adapting their
decision-making process accordingly.
 Learning helps the agent adapt to changes in the environment or
unexpected situations.

 How does it work?


 Learning in agents can be either supervised (where the agent is
trained with labeled data) or unsupervised (where the agent
identifies patterns from unstructured data).
 The learning process might involve reinforcement learning, where
the agent receives feedback in the form of rewards or penalties for
its actions.
 Example:
 A voice assistant like Siri or Alexa learns over time by remembering
users' preferences, improving its ability to understand and
respond to voice commands.
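The reward-and-penalty idea behind reinforcement learning can be sketched in a few lines. This is an illustrative toy (the class and action names are invented here, and real reinforcement learning adds exploration and state): the agent tracks the average reward each action has earned and prefers the best-known action.

```python
# Toy reward-based learner: remember the average reward per action
# and greedily pick the action with the best average so far.
class RewardLearner:
    def __init__(self, actions):
        self.totals = {a: 0.0 for a in actions}  # summed reward per action
        self.counts = {a: 0 for a in actions}    # times each action was tried

    def update(self, action, reward):
        """Feedback step: record the reward this action just earned."""
        self.totals[action] += reward
        self.counts[action] += 1

    def best_action(self):
        """Pick the action with the highest average reward so far."""
        def value(a):
            return self.totals[a] / self.counts[a] if self.counts[a] else 0.0
        return max(self.totals, key=value)

learner = RewardLearner(["left", "right"])
learner.update("left", 0.0)    # penalty-like feedback
learner.update("right", 1.0)   # reward-like feedback
```

After this feedback, `learner.best_action()` prefers `"right"`, mirroring how an agent's behavior shifts toward rewarded actions over time.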

6. Performance Measure
 What is it?
 The performance measure is a metric or set of criteria used to
evaluate how well the agent is achieving its goals.
 It assesses the effectiveness of the agent's actions and determines
whether it is on track to complete its tasks successfully.
 Performance measures guide the agent in adjusting its behavior if
necessary.
 How does it work?
 The performance measure helps define success for the agent.
 For example, it might evaluate how quickly or accurately a robot
performs a task or how effectively a self-driving car navigates
through traffic.
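The components above fit together in a sense-reason-act loop. The sketch below is a hypothetical minimal example (the environment and all names are invented for illustration): sensors feed percepts to a reasoning function, an actuator step applies the chosen action back to the environment, and a performance measure accumulates the score.

```python
# Toy environment: the agent scores a point whenever it cleans dirt.
class GridEnvironment:
    def __init__(self):
        self.dirty = True
    def sense(self):                       # perception (sensors)
        return "Dirty" if self.dirty else "Clean"
    def act(self, action):                 # actuators change the environment
        if action == "Suck" and self.dirty:
            self.dirty = False
            return 1
        self.dirty = True                  # dirt reappears over time
        return 0

def run_agent(environment, reason, steps=10):
    """Sense -> reason -> act loop with an accumulated performance measure."""
    score = 0
    for _ in range(steps):
        percept = environment.sense()      # 1. perceive
        action = reason(percept)           # 2. decide
        score += environment.act(action)   # 3. act, 4. measure performance
    return score

score = run_agent(GridEnvironment(), lambda p: "Suck" if p == "Dirty" else "Wait")
```

Over 10 steps the environment alternates between dirty and clean, so this agent scores 5: the performance measure makes "how well is the agent doing?" a concrete number.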
Agent Communication
 Agent communication refers to the process by which intelligent agents
exchange information and coordinate actions with other agents within a
system.
 In multi-agent systems (MAS), agents may need to communicate with
one another to achieve their individual or collective goals.

This communication is essential for collaboration, negotiation, and
sharing of knowledge among agents, especially when tasks are too
complex or require multiple agents to work together.

# Types of Agent Communication


1. Informing Communication
 Description: In this type, one agent simply provides information to
another agent without expecting any response. The receiver of the
message doesn’t need to act on it immediately but might use the
information later.

Example: A robot in a factory might inform other robots about the
location of a newly placed object or an updated task status.

2. Requesting Communication
 Description: One agent asks another agent for information or an
action. It is an interactive form of communication where the
requesting agent waits for a response or a result from the receiving
agent.
 Example: In a delivery system, an agent might request the status of an
order from another agent to check if it’s ready for dispatch.

3. Commanding Communication
 Description: A commanding agent sends an instruction or order to
another agent, directing it to perform a specific action or task. The
agent receiving the command is expected to act on it.
 Example: A robot might command another robot to pick up an item or
move to a specific location in a warehouse.
4. Negotiating Communication
 Description: Agents communicate to come to an agreement on a
particular issue, such as deciding on resource allocation or resolving
conflicting goals. This communication is often a back-and-forth
exchange of offers, counteroffers, or terms.
 Example: Two agents in a negotiation system might exchange offers
about the price of a product, aiming to settle on an acceptable price for
both parties.

5. Cooperative Communication
 Description: In this type, agents work together to achieve a shared
goal. They share information, resources, or tasks to ensure the
success of a collective objective.
 Example: In a cooperative robot team, each robot might share its
observations and actions with others to optimize their collective
performance, like in a search-and-rescue mission where robots share
data about the environment.
Types of Languages in Agent Communication
1. KQML (Knowledge Query and Manipulation Language)
 KQML is a communication language designed specifically for
exchanging information between intelligent agents.
o It is a language for the exchange of knowledge, and it
includes a set of performatives (or speech acts) that define
the type of communication an agent is engaged in, such as
querying, requesting, replying, or asserting.
 Features:
o It uses a predefined set of communication "acts" like asking
questions (QUERY), requesting actions (REQUEST), and
stating facts (ASSERT).
o KQML is based on a message-passing model, meaning one
agent sends a message to another agent.
o It allows agents to communicate in a formal, structured
manner, ensuring clear understanding between agents.
 Example:
o An agent might send a message like (ask-if :receiver agent1
:content (isWeatherSunny)), which asks agent1 whether the
weather is sunny.
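A KQML-style message can be sketched as a small string-building helper. This is an illustrative approximation of KQML's parenthesized, keyword-parameter syntax, not a complete implementation of the language:

```python
def kqml_message(performative, sender, receiver, content):
    """Build a KQML-style message string.

    The :sender/:receiver/:content keywords follow KQML's
    keyword-parameter convention; the performative (e.g. ask-if,
    tell, achieve) names the speech act being performed."""
    return (f"({performative} :sender {sender} "
            f":receiver {receiver} :content {content})")

msg = kqml_message("ask-if", "agent2", "agent1", "(isWeatherSunny)")
# msg is "(ask-if :sender agent2 :receiver agent1 :content (isWeatherSunny))"
```

Because every message carries an explicit performative, the receiving agent knows whether it is being queried, told a fact, or asked to act, before it even parses the content.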

2. FIPA-ACL (Foundation for Intelligent Physical Agents - Agent
Communication Language)
 FIPA-ACL is another popular agent communication language,
developed by the FIPA organization, that allows agents to exchange
information in a multi-agent system (MAS).
o It is widely used in multi-agent systems to enable agents to
communicate with one another in a standardized way,
ensuring that agents from different platforms or with different
architectures can work together seamlessly.
 Features:
o FIPA-ACL provides a more standardized and formal approach
to agent communication, ensuring interoperability between
agents in different systems.
o It defines a set of communication protocols, like request-
response, inform, or negotiate, and allows agents to specify
what kind of message they are sending.
o It is built on speech act theory, which classifies messages
based on their communicative intent, such as requesting,
informing, or offering.
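The structure of a FIPA-ACL message can be sketched as a small data class. This models only a subset of the message parameters (FIPA-ACL also defines fields such as a protocol and a conversation identifier), and the performative list here is abbreviated for illustration:

```python
from dataclasses import dataclass

# Abbreviated subset of FIPA-ACL performatives, for illustration only.
PERFORMATIVES = {"inform", "request", "propose", "agree", "refuse"}

@dataclass
class ACLMessage:
    """Sketch of a FIPA-ACL message's main fields."""
    performative: str   # the speech act, e.g. "inform" or "request"
    sender: str
    receiver: str
    content: str

    def __post_init__(self):
        # Standardized performatives are what make messages
        # interpretable across platforms and architectures.
        if self.performative not in PERFORMATIVES:
            raise ValueError(f"unknown performative: {self.performative}")

msg = ACLMessage("inform", "seller", "buyer", "price(book, 20)")
```

Rejecting unknown performatives is the point of the standard: an agent from a different platform can still classify the message's intent before interpreting its content.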
Que. What role does bargaining play in
resolving conflicts and reaching
agreements among intelligent agents?
Ans.
 Bargaining plays a crucial role in resolving conflicts and reaching
agreements among intelligent agents in multi-agent systems.
 It enables agents to negotiate and collaborate to find mutually
acceptable solutions in situations where there is a conflict of interests
or resources.
 Here's how bargaining contributes to the process:
1. Conflict Resolution
 In multi-agent systems, agents often have conflicting goals, desires, or
resources. Bargaining allows agents to find a middle ground where each
agent can adjust their expectations or goals to reach a solution that
satisfies the most important aspects of all parties involved. Through
negotiation, agents can:
 Compromise: Agents can give up part of what they want in
exchange for concessions from the other side.
 Prioritize Goals: Agents can focus on their most important goals
while being flexible on others.
 Resolve Resource Allocation Issues: In scenarios where resources
(e.g., time, space, or computational power) are limited, bargaining
helps agents allocate resources fairly, ensuring that no agent feels
disadvantaged.
2. Maximizing Utility
 Bargaining allows agents to maximize their utility or satisfaction. Each
agent aims to achieve the best possible outcome given the situation,
and bargaining helps find a solution where all agents get the best value
possible.
 Through iterative exchanges, agents can refine the agreement so
that no agent ends up worse off than its fallback position, and
ideally each agent improves its outcome.

3. Reaching Mutual Agreement:


 Bargaining is a process where agents exchange proposals, and
through this exchange, they can adjust their strategies and demands
until they arrive at a mutually acceptable solution. This process is
iterative, with each agent adjusting its position based on the
responses it receives.
 The goal is to reach a point where both agents are satisfied with the
terms of the agreement, either by finding common ground or by
making compromises.
4. Cooperative Behavior:
 Bargaining promotes cooperation rather than competition among
agents. Instead of focusing solely on individual goals, agents negotiate
to find a solution that benefits everyone involved.
 This fosters a collaborative environment where agents:
 Form Agreements: By bargaining, agents can agree on common
objectives, strategies, or tasks that they will pursue together.
 Create Alliances: Bargaining may lead to the formation of
temporary or permanent alliances among agents to achieve
shared goals.
5. Cooperation and Collaboration:
 In some cases, agents may need to cooperate to achieve a common
goal, such as solving a problem together or completing a task.
Bargaining enables agents to find terms of cooperation that are fair and
mutually beneficial.

For example, agents in a transportation network might negotiate the
allocation of routes to ensure optimal traffic flow while minimizing
delays for each agent.
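The iterative exchange of offers and counteroffers can be sketched as a toy alternating-concession loop (a hypothetical model invented here, not a standard negotiation protocol): each round, the buyer raises its bid and the seller lowers its ask until the offers cross.

```python
def bargain(buyer_offer, seller_offer, concession=5, max_rounds=20):
    """Toy alternating-concession bargaining.

    Each round both agents concede a fixed amount; when the buyer's
    bid meets or exceeds the seller's ask, they split the difference.
    Returns the agreed price, or None if no agreement is reached."""
    for _ in range(max_rounds):
        if buyer_offer >= seller_offer:          # offers crossed: agree
            return (buyer_offer + seller_offer) / 2
        buyer_offer += concession                # buyer raises the bid
        seller_offer -= concession               # seller lowers the ask
    return None                                  # deadlock within the limit

price = bargain(buyer_offer=50, seller_offer=100)
```

Starting from 50 and 100 with a concession step of 5, the offers meet at 75 after five rounds; a tighter `max_rounds` models negotiations that break down before agreement.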
 Augmentation among agents
 Augmentation among agents refers to the process where agents
enhance or improve each other's capabilities by collaborating or sharing
knowledge, resources, or skills.
 It is a key concept in multi-agent systems (MAS), where multiple agents
work together, either in cooperation or competition, to achieve better
outcomes than they would individually.

 The idea is to increase the overall efficiency, performance, or success of
the agents by combining their strengths or compensating for each
other's weaknesses.
 Key Ideas
1. Collaboration for Enhanced Performance:
 Agents can augment one another's capabilities by working
together. By dividing tasks or sharing information, agents can
achieve goals that would be difficult or impossible to reach
alone.
 This collaboration allows agents to combine their strengths and
mitigate individual limitations.
2. Resource Sharing:
 In a multi-agent system, agents may have different resources (such
as computational power, storage, or energy).
 By augmenting each other with shared resources, agents can
overcome their limitations and operate more effectively.
 For instance, if one agent has access to a larger database, it can share
the data with other agents that have less access to information.
3. Knowledge Sharing:
 Augmentation can also occur when agents share knowledge or
insights that help each other improve decision-making or
problem-solving. This could involve sharing data, strategies, or
experiences that have been learned from the environment or
previous actions.
 For example, one agent might have learned a more efficient way to
navigate an environment or solve a subtask, and by sharing this
knowledge, it helps other agents perform better in similar situations.
Trust and Reputation in Multi-Agent Systems
 Trust and Reputation in Multi-Agent Systems (MAS) are
fundamental concepts for ensuring reliable and effective interactions
between agents. In MAS, agents often need to collaborate or compete
with each other to achieve their goals.
 Since agents may not always have perfect knowledge about each
other, and some agents might behave maliciously or unpredictably,
trust and reputation mechanisms are essential for facilitating
cooperation, ensuring fair exchanges, and maintaining the overall
stability of the system.

1. Trust in Multi-Agent Systems:


 Trust in MAS refers to the belief or confidence an agent has in the
behavior, capabilities, and reliability of another agent. An agent builds
trust based on its experiences or interactions with others.
 Trust helps agents decide whom to cooperate with, whom to rely on for
resources, and how to share information.
Example:
 In a marketplace MAS where agents buy and sell goods, an agent may
choose to purchase from another agent based on trust.
 If the seller has a history of delivering quality products on time, the
buyer will trust that the seller will continue to do so.
 If the seller is known to deceive buyers, trust will decrease, and the
buyer may avoid purchasing from them.

2. Reputation in Multi-Agent Systems:


 Reputation is the overall opinion or judgment of an agent based on
the history of its actions, as seen by other agents.
 Reputation is often used as a proxy for trust, as it reflects how well
an agent performs tasks and interacts within the system.
 While trust is an individual agent’s belief, reputation is a more
collective, social evaluation.
Example:
 In a review-based system like an online marketplace, customers can
rate sellers based on their experience.
 These ratings help form the seller's reputation. If a seller consistently
receives good ratings, their reputation grows, and new customers are
more likely to trust them.
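The rating-based reputation idea can be sketched as a running average of scores per agent. The class and names below are invented for illustration; real reputation systems typically also weight recency and the credibility of the raters.

```python
class ReputationTracker:
    """Aggregate per-agent ratings into a simple mean reputation score."""
    def __init__(self):
        self.ratings = {}            # agent name -> list of ratings (e.g. 1-5)

    def rate(self, agent, rating):
        """Record one rater's judgment of an agent's behavior."""
        self.ratings.setdefault(agent, []).append(rating)

    def reputation(self, agent):
        """Collective evaluation: the mean of all recorded ratings."""
        scores = self.ratings.get(agent, [])
        if not scores:
            return None              # no history: reputation is unknown
        return sum(scores) / len(scores)

rep = ReputationTracker()
for r in (5, 4, 5):
    rep.rate("seller_a", r)
```

Note how this captures the trust/reputation distinction from the text: each individual rating is one agent's trust judgment, while the aggregated mean is the collective, social evaluation.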

 Role of Trust and Reputation in Multi-Agent Systems:


 Facilitating Cooperation: Trust and reputation help agents choose
reliable partners, making joint tasks less risky.
 Encouraging Good Behavior: Agents with a good reputation or high
trust are incentivized to continue behaving well to maintain or
increase their standing in the system.
 Reducing Uncertainty: In MAS, agents often work in
dynamic, unpredictable environments. By relying on trust and
reputation, agents can make decisions even when they do not have
complete information, which reduces uncertainty about other
agents' behavior.
