Lecture 2

The document discusses intelligent agents and decision support systems. It covers omniscience, rationality versus perfection, exploration and learning, agent autonomy, and defining an agent's task environment using PEAS (Performance Measure, Environment, Actuators, Sensors). Examples of task environments and agent types are provided, and the key aspects of an agent's architecture and a table-driven agent program are also described.


EC-350 AI and Decision Support Systems

Week 2
Intelligent Agents

Dr. Arslan Shaukat

Acknowledgement: Lecture slide material from Stuart Russell

Omniscience
▪ An omniscient agent knows the actual outcomes of its
actions and can act accordingly
▪ Omniscience is impossible in reality
▪ Road crossing example


Omniscience - Perfection
▪ Rationality is NOT the same as Perfection
▪ Rationality maximizes expected performance
▪ Perfection maximizes actual performance
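
A minimal sketch of the distinction, with hypothetical numbers for the road-crossing example: the rational choice maximizes expected performance, which need not coincide with the action that actually turns out best.

# Hypothetical outcomes for each action as (probability, performance score) pairs.
actions = {
    "cross_now": [(0.99, 10), (0.01, -1000)],
    "wait": [(1.00, 5)],
}

def expected_performance(outcomes):
    return sum(p * score for p, score in outcomes)

# Rational: pick the best EXPECTED score; here "wait" wins (5 vs. -0.1).
# Perfection would require knowing the actual outcome in advance.
best = max(actions, key=lambda a: expected_performance(actions[a]))
print(best)  # wait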


Exploration, Learning
▪ Doing actions in order to modify future percepts,
sometimes called information gathering, is an important
part of rationality.
▪ An agent performs such actions to improve its future percepts;
this is also called exploration
– e.g., the exploration a vacuum-cleaner agent performs in an
unknown environment
▪ A rational agent should not only gather information but also
learn as much as possible from what it perceives


Agent Autonomy
▪ The capacity to compensate for partial or incorrect prior
knowledge by learning
▪ An agent is called autonomous if its behavior is
determined by its own experience (with the ability to learn
and adapt)
▪ A truly autonomous agent should be able to operate
successfully in a wide variety of environments


Task Environment
▪ Problems to which rational agents are the solutions.
▪ PEAS
– P – Performance Measure
– E – Environment
– A – Actuators
– S – Sensors
▪ The first step in designing an agent must be to define the task
environment



PEAS - Example
▪ Automated Taxi Driver Agent
– Performance measure: Safe, correct destination, minimizing fuel
consumption, min wear and tear, fast, legal, comfortable trip, maximize
profit
– Environment: Roads, other traffic, pedestrians, customers, stray animals,
police cars, signals, potholes
– Actuators: Steering, accelerator, brake, signal, horn, display
– Sensors: TV Cameras, sonar, speedometer, accelerometer, GPS, odometer,
engine sensors, keyboard, mic
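
One way to make such a description concrete is a small record type (a sketch; the class and field names are ours, not part of any standard API):

from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

taxi = PEAS(
    performance_measure=["safe", "correct destination", "minimal fuel",
                         "legal", "comfortable", "maximize profit"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "keyboard"],
)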


Agent Type and PEAS

▪ Medical diagnostic system
– Performance measure: healthy patients, minimize costs
– Environment: patients, hospital, staff
– Actuators: display of questions, tests, diagnoses, treatments, referrals
– Sensors: keyboard entry of symptoms, findings, patients' answers
▪ Satellite image analysis system
– Performance measure: correct image characterization
– Environment: downlink from orbiting satellite
– Actuators: display of scene categorization
– Sensors: color pixel arrays
▪ Part-picking robot
– Performance measure: percentage of parts in correct bins
– Environment: conveyor belt with parts, bins
– Actuators: jointed arm and hand
– Sensors: cameras, joint angle sensors
▪ Refinery controller
– Performance measure: maximize purity, yield, safety
– Environment: refinery, operators
– Actuators: valves, pumps, heaters, displays
– Sensors: temperature, pressure, chemical sensors


Environment Types
▪ Fully observable vs. partially observable:
– An agent's sensors give it access to the complete state of the
environment at each point in time.
– An environment may be partially observable because of noisy or
inaccurate sensors
▪ Deterministic vs. stochastic:
– The next state of the environment is completely determined by
the current state and the action executed by the agent.
– If the environment is partially observable, then it could appear
to be stochastic.
▪ Episodic vs. sequential:
– The agent's experience is divided into atomic "episodes" (each
episode consists of the agent perceiving and then performing a
single action)
– The choice of action in each episode depends only on the episode
itself; in a sequential environment, the current decision could
affect all future decisions.
05/10/2023 11
EC-350 AI and DSS Dr Arslan Shaukat EME (NUST)

11

Environment Types
▪ Static vs. dynamic:
– The environment is unchanged while an agent is deliberating.
– The environment is semi-dynamic if the environment itself does
not change with the passage of time but the agent's
performance score does
▪ Discrete vs. continuous:
– A limited number of distinct states
– Clearly defined percepts and actions.
▪ Single agent vs. multiagent:
– An agent operating by itself in an environment.
– Chess is a competitive multiagent environment.
– Taxi-driving is a partially cooperative multiagent environment.

Examples of Task Environments
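
As an illustration, the properties of three classic task environments (as usually tabulated in Russell and Norvig) can be written out as data; this is a sketch, not an exhaustive list:

# task: (observability, determinism, episodicity, dynamics, state type, agents)
environments = {
    "crossword puzzle": ("fully", "deterministic", "sequential",
                         "static", "discrete", "single"),
    "chess with a clock": ("fully", "deterministic", "sequential",
                           "semi-dynamic", "discrete", "multi"),
    "taxi driving": ("partially", "stochastic", "sequential",
                     "dynamic", "continuous", "multi"),
}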


Structure of Agents
▪ Agent Program
– Implements the agent function: the mapping from percept
sequences to actions
– Runs on some sort of computing device, which we call the
architecture
– The architecture may be a plain computer or may include
special hardware


Architecture
▪ The architecture makes the percepts from the sensors
available to the program, runs the program, and feeds the
program's action choices to the actuators as they are
generated.
▪ The relationship
– Agent = architecture + program


Table Driven Agent


function TABLE-DRIVEN-AGENT(percept) returns an action
    static: percepts, a sequence, initially empty
            table, a table of actions, indexed by percept sequences,
            initially fully specified

    append percept to the end of percepts
    action ← LOOKUP(percepts, table)
    return action

Figure: The TABLE-DRIVEN-AGENT program is invoked for each new
percept and returns an action each time. It retains the complete percept
sequence in memory.
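
A direct Python transcription of this pseudocode (a sketch; the table below is a toy stand-in with a few hypothetical vacuum-world entries):

percepts = []  # the complete percept sequence, retained in memory

# Hypothetical lookup table indexed by the full percept sequence.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    percepts.append(percept)                   # append percept to percepts
    return table.get(tuple(percepts), "NoOp")  # LOOKUP(percepts, table)

print(table_driven_agent(("A", "Clean")))  # Right
print(table_driven_agent(("B", "Dirty")))  # Suck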


Table Driven Agent


▪ Disadvantages:
– (a) no physical agent in this universe will have the space to
store the table,
– (b) the designer would not have time to create the table,
– (c) no agent could ever learn all the right table entries from its
experience
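
Point (a) is easy to quantify: with |P| possible percepts per time step and a lifetime of T steps, the table needs the sum over t = 1..T of |P|^t entries. Even deliberately modest, hypothetical numbers explode:

# Table size for |P| percepts per step over a lifetime of T steps.
P, T = 10, 20
entries = sum(P**t for t in range(1, T + 1))
print(f"{entries:.2e}")  # about 1.11e+20 entries -- hopelessly large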


Agent Program
▪ The key challenge for AI is to find out how to produce
rational behavior from a smallish program rather than
from a vast table.


Types of Agents
▪ 5 types:
▪ Simple reflex agents
– respond directly to percepts
▪ Model-based reflex agents
– maintain internal state to track aspects of the world that are not evident in
the current percept.
▪ Goal-based agents
– act to achieve their goals
▪ Utility-based agents
– try to maximize their own expected "happiness" or utility
– Utility function
▪ Learning Agents
– Agents can improve their performance through learning


Simple Reflex Agents
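
A simple reflex agent selects actions by condition-action rules on the current percept alone. A minimal sketch for the vacuum world, following the reflex vacuum agent in Russell and Norvig:

def reflex_vacuum_agent(percept):
    location, status = percept   # decision uses the current percept only
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:                        # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck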


Model-based Reflex Agents
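
A model-based reflex agent additionally tracks internal state, updating it from each percept via a model of how the world evolves. A sketch, where update_state and rules are placeholder functions supplied by the designer:

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = None                 # internal model of the world
        self.update_state = update_state  # placeholder: folds a percept into the model
        self.rules = rules                # placeholder: condition-action rule matching

    def __call__(self, percept):
        # Update the tracked state, then act on the state, not the raw percept.
        self.state = self.update_state(self.state, percept)
        return self.rules(self.state)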


Goal-based Agents


Utility-based Agents
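
A utility-based agent compares states with a utility function rather than a binary goal test. A sketch with placeholder actions, result and utility functions:

def utility_based_choice(state, actions, result, utility):
    # Pick the action whose predicted successor state scores highest.
    return max(actions(state), key=lambda a: utility(result(state, a)))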


Solving Problems by Searching


Problem Solving Agent


▪ A problem-solving agent decides what to do by finding a
sequence of actions that leads to desirable states, and hence
to a solution
▪ 2 types of search algorithms:
– Uninformed search algorithms are given no information
about the problem other than its definition
– Informed search algorithms can do quite well given some
guidance on where to look for solutions
▪ Intelligent agents are supposed to maximize their
performance measure.
▪ Achieving this is sometimes simplified if the agent can
adopt a goal and aim at satisfying it

Example
▪ On holiday in Romania; currently in Arad.
▪ Flight leaves tomorrow from Bucharest

▪ Formulate goal:
– be in Bucharest

▪ Formulate problem:
– states: various cities
– actions: drive between cities

▪ Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest


Goal & Problem Formulation


▪ Goals help organize behaviour by limiting the objectives that the
agent is trying to achieve and hence the actions it needs to consider
▪ Goal formulation, based on current situation and the agent’s
performance measure, is the first step in problem solving
▪ Problem Formulation is the process of deciding what actions and
states to consider, given a goal
▪ In general, an agent with several immediate options of unknown
value can decide what to do by first examining different possible
sequences of actions that lead to states of known value, and then
choosing the best sequence


Example
One possible route
Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest


Search and Solution


▪ The process of looking for a sequence of actions to arrive
at a goal is called search
▪ A search algorithm takes a problem as input and returns a
solution in the form of an action sequence
▪ Once a solution is found, the recommended actions can
be carried out. This is called execution.
▪ This gives a formulate, search, execute design for the agent
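
The formulate, search, execute design can be sketched as follows (following the simple problem-solving agent skeleton in Russell and Norvig; formulate_goal, formulate_problem and search are placeholders):

def problem_solving_agent(state, formulate_goal, formulate_problem, search):
    goal = formulate_goal(state)              # formulate: pick a goal...
    problem = formulate_problem(state, goal)  # ...and a problem definition
    plan = search(problem)                    # search: a sequence of actions
    for action in plan or []:                 # execute: carry them out in turn
        yield action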


Problem Definition
▪ A problem can be defined by the following 5
components:
– Initial state: the state the agent starts in
– Actions (s): a description of the possible actions available to
the agent. Given a particular state s, ACTIONS(s) returns the
set of actions that can be executed in s
– Transition model/Result (s, a): returns the state that results
from doing action a in state s


Problem Definition (cont’d)


– Goal Test (s): a function that, when a state is passed to it, returns
True if the state is a goal and False otherwise
– Path Cost: an additive function which assigns a numeric cost to
each path. This function also reflects the agent's own performance
measure.
• Each step of a path has a step cost, denoted by c(s, a, s'), where a is
the action and s & s' are the current and new states respectively.
▪ A path, or solution, in the state space is a sequence of
states connected by a sequence of actions
▪ Together, the initial state, actions and new states
implicitly define the state space of the problem
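
The five components map naturally onto a small abstract class (a sketch; the method names mirror the slide's terminology):

class Problem:
    def __init__(self, initial_state):
        self.initial_state = initial_state  # the start state

    def actions(self, s):
        """ACTIONS(s): the set of actions executable in state s."""
        raise NotImplementedError

    def result(self, s, a):
        """RESULT(s, a): the state resulting from doing action a in s."""
        raise NotImplementedError

    def goal_test(self, s):
        """Return True if s is a goal state, False otherwise."""
        raise NotImplementedError

    def step_cost(self, s, a, s2):
        """c(s, a, s'): the cost of taking action a in s to reach s'."""
        return 1  # default: unit cost per step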


Find a Route – Arad to Bucharest


Problem Formulation - Example


▪ Problem Description: find an optimal path from Arad to Bucharest
▪ Initial State = In(Arad)
▪ Actions (s) = set of possible actions the agent can take, e.g.
Actions(In(Arad)) = {Go(Zerind), Go(Sibiu), Go(Timisoara)}
▪ Result (s, a): Result(In(Arad), Go(Zerind)) = In(Zerind)
▪ Goal Test: determine if at goal
– can be explicit, e.g., In(Bucharest)
▪ Path Cost: cost of each step added together
– e.g., sum of distances, number of actions executed, etc.
– The step cost is assumed to be ≥ 0
▪ A solution is a sequence of actions leading from the initial state to
a goal state; solution quality is measured by path cost
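
A concrete instance for route finding, building on the Problem class sketched earlier and a fragment of the Romania road map (distances in km, as given in Russell and Norvig):

ROADS = {  # a fragment of the Romania road map
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
}

class RouteProblem(Problem):
    def actions(self, s):
        return list(ROADS.get(s, {}))  # cities directly reachable from s

    def result(self, s, a):
        return a                       # the action "go to city a" lands in a

    def goal_test(self, s):
        return s == "Bucharest"

    def step_cost(self, s, a, s2):
        return ROADS[s][a]             # road distance as the step cost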


Vacuum World State Space Graph

▪ States: the agent is in one of two locations, each of which may or may
not contain dirt: 2 × 2 × 2 = 8 possible world states
▪ Initial state: any state can be designated as the initial state
▪ Actions (s): {Left, Right, Suck, NoOp}
▪ Result (s, a): the new state after the action, e.g., Suck makes the
current square clean
▪ Goal test: no dirt at any location
▪ Path cost: 1 per action
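
The state count is easy to verify in a line or two (agent location times the dirt status of each square):

from itertools import product

# (agent location, status of square A, status of square B)
states = list(product(["A", "B"], ["Clean", "Dirty"], ["Clean", "Dirty"]))
print(len(states))  # 8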


Example: The 8-puzzle

▪ States: locations of each tile and the blank;
9!/2 = 181,440 reachable world states
▪ Initial state: any state can be designated
▪ Actions (s): {Left, Right, Up, Down} (movements of the blank)
▪ Result (s, a): the new state after taking any of the above actions
▪ Goal test: does the state match the given goal state?
▪ Path cost: 1 per move
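
The reachable-state count checks out:

import math
print(math.factorial(9) // 2)  # 181440: half of all 9! tile permutations are reachable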


Example: 8-Queens Problem

▪ States: any arrangement of 0 to 8 queens on the board
▪ Initial state: no queens on the board
▪ Actions: add a queen to any square
▪ Result: new board state with the queen added
▪ Goal test: 8 queens on the board, none attacked
▪ Path cost: 1 per move


Searching for Solutions


▪ We solve problems by searching the state space
▪ The state space is represented by a graph or a tree
▪ We start at the root node and check whether it is a goal state
▪ If not, we apply the Result/successor function to generate
new states (nodes); this is called expanding a node
▪ The choice of which state to expand next is determined by the
search strategy
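
This expand-and-test loop is the skeleton of generic tree search (a sketch over raw states; the fringe argument, introduced more fully on a later slide, is any queue-like collection, and its discipline is exactly the search strategy):

def tree_search(problem, fringe):
    fringe.append(problem.initial_state)
    while fringe:
        s = fringe.pop()              # the strategy decides which node comes out
        if problem.goal_test(s):
            return s
        for a in problem.actions(s):  # expand: generate all successor states
            fringe.append(problem.result(s, a))
    return None                       # no solution found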


Tree Search Example


Assumptions about Node


▪ A node is a data structure with five components:
– State: the state in the state space to which the node
corresponds
– Parent node: the node which generated this node
– Action: the action that was applied to the parent to
generate this node
– Path-cost: the cost, denoted by g(n), of the path from the
initial state to this node
– Depth: the number of steps along the path from the initial
state to this node
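
As a data structure this is a plain record (a sketch):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                # the state this node corresponds to
    parent: Optional["Node"]  # the node that generated this node
    action: Any               # the action applied to the parent
    path_cost: float          # g(n): cost of the path from the initial state
    depth: int                # steps from the initial state to this node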


Fringe
▪ We shall use a data structure called the "fringe"
▪ It contains the collection of nodes that have been
generated but not yet expanded
▪ The search strategy decides which node from this
collection is expanded next
▪ The collection of nodes may be represented as a queue,
a stack, etc.
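
The choice of collection is what distinguishes the strategies (a sketch):

from collections import deque

fringe = deque(["n1", "n2"])  # FIFO queue: breadth-first behaviour
fringe.popleft()              # -> "n1", the node generated earliest

stack = ["n1", "n2"]          # LIFO stack: depth-first behaviour
stack.pop()                   # -> "n2", the node generated most recently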


Search Strategies Evaluation


▪ Strategies are evaluated along the following dimensions:
– completeness: does it always find a solution if one exists?
– time complexity: how long does it take to find a solution?
– space complexity: how much memory is needed?
– optimality: does it always find a least-cost solution?
▪ Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the shallowest goal node
– m: maximum depth of the tree (may be ∞)

05/10/2023 EC-350 AI and DSS Dr Arslan Shaukat EME (NUST) 43

43

Uninformed Search Strategies


▪ Uninformed (Blind) search strategies use only the
information available in the problem definition
– Breadth-first Search
– Uniform-cost search
– Depth-first search
– Depth-limited search
– Iterative deepening search
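
As a quick tie-in with the earlier sketches, breadth-first search is just tree_search with a first-in-first-out fringe; on the Romania fragment above it does reach Bucharest (assuming the tree_search and RouteProblem sketches from earlier slides):

from collections import deque

class FIFOFringe(deque):
    def pop(self):
        return self.popleft()  # FIFO discipline: expand the oldest node first

print(tree_search(RouteProblem("Arad"), FIFOFringe()))  # Bucharest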
