1. Rational Agents:
○ Central to AI, rational agents aim to perform actions that maximize their
performance measure.
○ Agents perceive their environment through sensors and act through
actuators.
○ Percept Sequence: The complete history of what the agent has perceived.
○ Agent Function: Maps percept sequences to actions, defining the agent's
behavior.
○ Agent Program: A concrete implementation of the agent function, run on the agent's physical architecture (a minimal sketch follows this list).
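A minimal sketch of the agent-function/agent-program distinction in code, assuming the two-location vacuum world described later; the table entries and the name table_driven_agent are illustrative, not from any library:

    percepts = []  # the percept sequence observed so far

    # The agent function made explicit as a table: percept sequence -> action.
    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
    }

    def table_driven_agent(percept):
        """Append the new percept, then look up the full history in the table."""
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    print(table_driven_agent(("A", "Clean")))  # -> Right

The table grows impossibly large for realistic environments, which is why practical agent programs compute actions rather than look them up.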
2. Task Environments:
○ PEAS Description: Performance measure, Environment, Actuators, Sensors (a code sketch of a PEAS record follows this list).
○ Properties of Task Environments:
■ Fully Observable vs. Partially Observable: Can the agent see the
complete state of the environment?
■ Single Agent vs. Multiagent: Is the agent operating alone or with
others?
■ Deterministic vs. Stochastic: Is the next state of the environment
completely determined by the current state and the agent's action?
■ Episodic vs. Sequential: Are actions independent or do they have
long-term consequences?
■ Static vs. Dynamic: Does the environment change while the agent
is deliberating?
■ Discrete vs. Continuous: Are the states, time, and actions discrete
or continuous?
■ Known vs. Unknown: Does the agent know the "laws of physics" of
the environment?
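One way to keep a PEAS description machine-readable is a plain record type; the class below is purely illustrative, and the taxi entries paraphrase the usual textbook example:

    from dataclasses import dataclass

    @dataclass
    class PEAS:
        """Container for a task-environment description (illustrative only)."""
        performance_measure: list
        environment: list
        actuators: list
        sensors: list

    taxi = PEAS(
        performance_measure=["safe", "fast", "legal", "comfortable trip"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        actuators=["steering", "accelerator", "brake", "signal", "horn"],
        sensors=["cameras", "speedometer", "GPS", "odometer"],
    )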
3. Agent Types:
○ Simple Reflex Agents: React to current percepts without considering
history.
○ Model-Based Reflex Agents: Maintain internal state to handle partial
observability (sketched after this list).
○ Goal-Based Agents: Use goals to guide actions, considering future
consequences.
○ Utility-Based Agents: Maximize expected utility, balancing multiple goals
and uncertainties.
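A minimal sketch of a model-based reflex agent, assuming the rules are (condition, action) pairs and update_state is the agent's model of how percepts revise its internal state; all names here are invented for illustration:

    class ModelBasedReflexAgent:
        def __init__(self, update_state, rules):
            self.state = {}          # the agent's best guess at the world state
            self.update_state = update_state
            self.rules = rules       # list of (condition, action) pairs
            self.last_action = None

        def __call__(self, percept):
            # Fold the new percept (and the effect of the last action) into
            # the internal state, then match condition-action rules against it.
            self.state = self.update_state(self.state, self.last_action, percept)
            for condition, action in self.rules:
                if condition(self.state):
                    self.last_action = action
                    return action
            self.last_action = "NoOp"
            return "NoOp"

    # Example: remember each square's status even when looking elsewhere.
    agent = ModelBasedReflexAgent(
        update_state=lambda state, act, p: {**state, p[0]: p[1]},
        rules=[(lambda s: "Dirty" in s.values(), "Suck")],
    )
    print(agent(("A", "Dirty")))  # -> Suck

The internal state lets the agent act on facts it perceived earlier, which a simple reflex agent would have already forgotten.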
4. Learning Agents:
○ Learning Element: Improves the agent's performance over time.
○ Performance Element: Selects actions based on current knowledge.
○ Critic: Tells the learning element how well the agent is doing against a fixed performance standard.
○ Problem Generator: Suggests exploratory actions so the agent can discover better behaviors (see the sketch below).
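A rough sketch of how the four components might be wired together in one decision step; every function name and the simple value-table update are invented here for illustration:

    import random

    q = {}  # learned value of each action (the agent's current "knowledge")

    def critic(reward):
        """Feedback: did the last action meet the performance standard?"""
        return reward > 0

    def learning_element(action, good):
        """Adjust the stored value of an action based on the critic's verdict."""
        q[action] = q.get(action, 0.0) + (1.0 if good else -1.0)

    def performance_element(actions):
        """Select the action with the highest learned value."""
        return max(actions, key=lambda a: q.get(a, 0.0))

    def problem_generator(actions, chosen, explore_prob=0.1):
        """Occasionally override the greedy choice to gather new experience."""
        return random.choice(actions) if random.random() < explore_prob else chosen

    actions = ["Left", "Right", "Suck"]
    learning_element("Suck", critic(reward=1))   # learn from earlier feedback
    action = problem_generator(actions, performance_element(actions))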
Key Concepts:
● Rationality: An agent is rational if it selects actions that are expected to maximize its
performance measure, given its percept sequence and prior knowledge (written as a formula after this list).
● Performance Measure: Evaluates the sequence of environment states resulting
from the agent's actions.
● Environment Complexity: The difficulty of designing an agent depends on the
properties of the environment (e.g., stochastic, dynamic, multiagent).
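The rationality definition above can be written compactly; the notation below (percept sequence e_{1:t}, prior knowledge K, performance measure U applied to the resulting state sequence, available actions A) is introduced here, not taken from the notes:

    a^* = \arg\max_{a \in A} \; \mathbb{E}\big[\, U \mid e_{1:t},\, K,\, a \,\big]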
Examples:
● Vacuum-Cleaner World: A simple environment with two locations, where the
agent can move left or right, suck dirt, or do nothing (its standard reflex agent is sketched after this list).
● Automated Taxi Driver: A complex environment requiring navigation, interaction
with other drivers, and handling various road conditions.
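The standard simple reflex agent for the vacuum world fits in a few lines; percepts are assumed to be (location, status) pairs:

    def reflex_vacuum_agent(percept):
        """Suck if the current square is dirty; otherwise move to the other square."""
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:
            return "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
    print(reflex_vacuum_agent(("B", "Clean")))  # -> Left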
Challenges:
● Partial Observability: Agents must maintain internal state to handle unobserved
aspects of the environment.
● Uncertainty: Stochastic environments require agents to make decisions under
uncertainty, often using probabilistic models (a minimal belief update is sketched after this list).
● Learning: Agents must adapt to new environments and improve their
performance over time through learning.
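A minimal example of such a probabilistic model: a Bayes update of a belief over a hidden state after one noisy observation. The probabilities below are invented for illustration:

    belief = {"dirty": 0.5, "clean": 0.5}   # prior over the hidden state
    likelihood = {                           # P(sensor reports "dirt" | state)
        "dirty": 0.9,
        "clean": 0.2,
    }

    # The sensor reports "dirt": weight the prior by the likelihood, renormalize.
    unnormalized = {s: belief[s] * likelihood[s] for s in belief}
    total = sum(unnormalized.values())
    belief = {s: p / total for s, p in unnormalized.items()}
    print(belief)  # {'dirty': ~0.818, 'clean': ~0.182}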
Conclusion:
The design of intelligent agents involves understanding the task environment, defining
appropriate performance measures, and selecting the right type of agent architecture.
Rationality is achieved by maximizing the expected performance measure, considering
the agent's percepts, knowledge, and possible actions. Learning agents further enhance
their performance by adapting to new information and experiences.