AI Lecture 3

Artificial Intelligence

CSC-462
Today’s Learning

Problem-Solving (Search) Agents


Problem types
Problem formulation
Example problems
Search and Search Agent
• Suppose an agent can execute several actions
immediately in a given state.
• The agent's job is to identify and execute a
sequence of actions until it reaches the goal.
• The sequence of actions that is best
(according to the performance measure) is the
solution.
• Finding this sequence of actions is called search, and
the agent that does this is called a problem-solver
(search agent, goal-based agent).
• NB: It's possible that a search might fail, e.g., by
getting stuck in an infinite loop, or by being unable to find the
goal at all.
Examples

The 8-queens problem: on a chess board, place 8 queens so
that no queen is attacking any other horizontally, vertically, or
diagonally.
Number of possible sequences to investigate:

64 × 63 × 62 × ... × 57 ≈ 1.8 × 10^14
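The count above can be checked directly: placing 8 queens one at a time on distinct squares gives 64 choices for the first, 63 for the second, and so on down to 57. A quick sketch:

```python
import math

# Number of ordered ways to place 8 queens on 8 distinct squares
# of a 64-square board: 64 * 63 * ... * 57.
sequences = math.prod(range(57, 65))
print(sequences)   # 178462987637760, i.e. about 1.8 * 10**14
```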
Linking Search to Trees
Visualize the state space as a graph, then search
along different paths of the graph until you
reach the solution.
Problem formulation
• Initial state: the state in which the agent starts
• States: all states reachable from the initial state by any
sequence of actions (state space)
• Actions: possible actions available to the agent. At a state s,
Actions(s) returns the set of actions that can be executed
in state s (action space)
• Transition model: description of what each action does:
Results(s, a)
• Goal test: determines if a given state is a goal state
• Path cost: function that assigns a numeric cost to a path
w.r.t. the performance measure
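The five components above map naturally onto an abstract class. This is a minimal sketch in the spirit of the AIMA codebase; the names (`Problem`, `actions`, `result`, and so on) are illustrative choices, not from the lecture:

```python
class Problem:
    """Abstract problem formulation: initial state, actions,
    transition model, goal test, and path cost."""

    def __init__(self, initial, goal=None):
        self.initial = initial      # initial state
        self.goal = goal            # goal state (if explicit)

    def actions(self, state):
        """Return the set of actions executable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing
        `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Default goal test: compare against an explicit goal."""
        return state == self.goal

    def path_cost(self, c, state, action, next_state):
        """Additive path cost: cost so far plus one unit step cost."""
        return c + 1
```

Concrete problems (8-queens, route finding, the vacuum world) subclass this and fill in `actions` and `result`.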
Examples

• States:
all arrangements of 0 to 8 queens on the board
• Initial state:
no queens on the board
• Actions:
add a queen to any empty square
• Transition model:
the updated board
• Goal test:
8 queens on the board with none attacked
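The goal test of this formulation can be written down directly. Here queens are represented as (row, column) pairs, a representation chosen for the sketch rather than given in the slides:

```python
def attacks(q1, q2):
    """True if two queens at (row, col) attack each other
    horizontally, vertically, or diagonally."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(queens):
    """8 queens on the board with no pair attacking."""
    return len(queens) == 8 and all(
        not attacks(queens[i], queens[j])
        for i in range(len(queens))
        for j in range(i + 1, len(queens))
    )

# One known 8-queens solution, one queen per row:
solution = [(r, c) for r, c in enumerate([0, 4, 7, 5, 2, 6, 1, 3])]
print(goal_test(solution))   # True
```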
Problem Solving Agent
Example: Traveling in Romania
No map → physical search
Map → deliberative search
Example: Traveling in Romania
On holiday in Romania; currently in Arad.
Formulate goal: Be in Bucharest
Formulate problem:
– States:
various cities
– Actions:
drive between cities
Find solution:
– Sequence of cities, e.g., Arad, Sibiu, Fagaras,
Bucharest.
Partial search tree for route finding
from Arad to Bucharest
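The route-finding search behind this partial tree can be sketched as a breadth-first search over a fragment of the Romania road map. The adjacency list below is a subset I have written out for illustration:

```python
from collections import deque

# A fragment of the Romania road map (adjacency list).
roads = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Timisoara": ["Arad"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def bfs_route(start, goal):
    """Breadth-first search: expand the shallowest path first."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for city in roads[path[-1]]:
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None

print(bfs_route("Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

This recovers exactly the solution sequence from the slide: Arad, Sibiu, Fagaras, Bucharest.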
Search Problem Types
Static: the configuration of the graph (the city map) is
unlikely to change during search

Observable: the agent knows the state (node)
completely, e.g., which city it is in currently

Discrete: a discrete number of cities and routes
between them

Deterministic: transiting from one city (node) along one
route can lead to only one possible city

Single-Agent: we assume only one agent searches at a
time, but multiple agents can also be used.
Problem types
Deterministic, fully observable → single-state problem
– Agent knows exactly which state it will be in; solution is a
sequence
Non-observable → sensor-less problem (conformant
problem)
– Agent may have no idea where it is; solution is a
sequence
Nondeterministic and/or partially observable →
contingency problem
– percepts provide new information about the current state
– often interleave search and execution
Unknown state space → exploration problem
Example: Vacuum world
• Single-state, start in #5.
Solution?
• [Right, Pick]
• Sensor-less, start in
{1,2,3,4,5,6,7,8} e.g.,
Right goes to {2,4,6,8}
Solution?
• [Right, Pick, Left,
Pick]
• Contingency
• Nondeterministic:
Pick may dirty a
clean carpet
• Partially observable:
location, dirt at
current location
Single-state problem formulation
A problem is defined by four items:
1. initial state e.g., "at Arad"
2. actions or successor function S(x) = set of action–
state pairs
– e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }
3. goal test, can be
– explicit, e.g., x = "at Bucharest"
– implicit, e.g., Checkmate(x)
4. path cost (additive)
– e.g., sum of distances, number of actions executed,
etc.
– c(x,a,y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the
initial state to a goal state.
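As a concrete instance of an additive path cost, the route Arad → Sibiu → Fagaras → Bucharest simply sums its step costs; the road distances (140, 99, and 211 km) are taken from the AIMA Romania map:

```python
# Step costs c(x, a, y) along Arad -> Sibiu -> Fagaras -> Bucharest,
# using road distances from the AIMA Romania map (km).
step_costs = [140, 99, 211]

# Additive path cost: the sum of the step costs along the path.
path_cost = sum(step_costs)
print(path_cost)   # 450
```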
Vacuum world state space graph

states?
actions?
goal test?
path cost?
Vacuum world state space graph

states? integer dirt and robot locations
actions? Left, Right, Pick
goal test? no dirt at all locations
path cost? 1 per action
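This formulation is small enough to implement directly. Here a state is the agent's location plus the set of dirty locations; state #5 from the slide's figure is taken to be agent at the left square with dirt at the right square only, which is an assumption about the figure's numbering (it is consistent with [Right, Pick] being the solution):

```python
# Vacuum world: state = (agent_location, frozenset of dirty locations).

def result(state, action):
    """Transition model for the deterministic vacuum world."""
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)
    if action == "Right":
        return ("B", dirt)
    if action == "Pick":             # "Suck" in AIMA's terminology
        return (loc, dirt - {loc})
    return state

def goal_test(state):
    """Goal: no dirt at any location."""
    return not state[1]

state = ("A", frozenset({"B"}))      # assumed state #5
for action in ["Right", "Pick"]:
    state = result(state, action)
print(goal_test(state))              # True
```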
Example: The 8-puzzle

States?
Actions?
Goal test?
Path
cost?
Example: The 8-puzzle

States? locations of tiles
Actions? move blank left, right, up, down
Goal test? = goal state (given)
Path cost? 1 per move
[Note: optimal solution of the n-puzzle family is NP-hard]
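Each action above amounts to swapping the blank with an adjacent tile. Representing the board as a tuple of 9 entries with 0 as the blank (a representation chosen here for illustration):

```python
def result(board, action):
    """Move the blank (0) Left/Right/Up/Down on a 3x3 board
    stored as a tuple of 9 tiles; return the board unchanged
    if the move would leave the grid."""
    i = board.index(0)
    row, col = divmod(i, 3)
    moves = {"Left": (0, -1), "Right": (0, 1),
             "Up": (-1, 0), "Down": (1, 0)}
    dr, dc = moves[action]
    r, c = row + dr, col + dc
    if not (0 <= r < 3 and 0 <= c < 3):
        return board                 # illegal move: no effect
    j = r * 3 + c
    b = list(board)
    b[i], b[j] = b[j], b[i]          # swap blank with neighbour
    return tuple(b)

start = (1, 2, 3,
         4, 0, 5,
         6, 7, 8)
print(result(start, "Left"))   # (1, 2, 3, 0, 4, 5, 6, 7, 8)
```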
References
 Russell, S. and Norvig, P. Artificial Intelligence: A Modern Approach.
Third edition, Prentice Hall Series in Artificial Intelligence, Upper Saddle
River, NJ, 2010. URL: http://aima.cs.berkeley.edu/