cs188 Fa23 Lec03

The document outlines the course announcements and key concepts related to Informed Search in Artificial Intelligence, including heuristics, Greedy Search, and A* Search. It discusses the importance of admissible heuristics for optimality in search algorithms and provides examples such as the Pancake Problem and the 8 Puzzle. Additionally, it covers the properties of A* Search, including its applications and the significance of heuristic design.


Announcements

▪ Project 0 (optional) is due Tuesday, January 24, 11:59 PM PT


▪ HW0 (optional) is due Friday, January 27, 11:59 PM PT
▪ Project 1 is due Tuesday, January 31, 11:59 PM PT
▪ HW1 is due Friday, February 3, 11:59 PM PT
CS 188: Artificial Intelligence
Informed Search

Fall 2022
University of California, Berkeley
Today

▪ Informed Search
▪ Heuristics
▪ Greedy Search
▪ A* Search

▪ Graph Search
Recap: Search
Recap: Search

▪ Search problem:
▪ States (configurations of the world)
▪ Actions and costs
▪ Successor function (world dynamics)
▪ Start state and goal test

▪ Search tree:
▪ Nodes: represent plans for reaching states
▪ Plans have costs (sum of action costs)

▪ Search algorithm:
▪ Systematically builds a search tree
▪ Chooses an ordering of the fringe (unexplored nodes)
▪ Optimal: finds least-cost plans
Example: Pancake Problem

Cost: Number of pancakes flipped


Example: Pancake Problem
Example: Pancake Problem
State space graph with costs as weights

[State space graph figure: edge weights 2, 3, and 4 are flip costs]
General Tree Search

Action: flip top two, Cost: 2        Action: flip all four, Cost: 4

Path to reach goal: flip four, flip three
Total cost: 7
Informed Search
Search Heuristics
▪ A heuristic is:
▪ A function that estimates how close a state is to a goal
▪ Designed for a particular search problem
▪ Examples: Manhattan distance, Euclidean distance for
pathing

[Maze figure with example heuristic values: 10, 5, 11.2]
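The two pathing heuristics named above can be sketched as small functions; the (x, y) grid-coordinate representation here is an assumption for illustration:

```python
import math

def manhattan_distance(pos, goal):
    # Admissible for 4-directional, unit-cost grid movement:
    # it never overestimates the number of steps remaining.
    return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

def euclidean_distance(pos, goal):
    # Straight-line distance; a lower bound on any path length,
    # so it is admissible as well.
    return math.hypot(pos[0] - goal[0], pos[1] - goal[1])
```

For example, from (0, 0) to (3, 4) the Manhattan distance is 7 and the Euclidean distance is 5.0.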
Example: Heuristic Function

[Figure: pancake state-space graph annotated with h(x) values]
Example: Heuristic Function
Heuristic: the number of the largest pancake that is still out of place
[Figure: state-space graph annotated with h(x) values 0–4]
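This heuristic can be sketched directly; the list-of-sizes encoding (index 0 = top of the stack, goal = 1..n from top to bottom) is an assumed representation:

```python
def pancake_heuristic(stack):
    # stack[i] is the size of the pancake at depth i (0 = top);
    # the goal stack is (1, 2, ..., n). A pancake of size s belongs
    # at index s - 1, so return the size of the largest misplaced one.
    return max((size for i, size in enumerate(stack) if size != i + 1),
               default=0)  # 0 means the stack is already solved
```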
Greedy Search
Greedy Search
▪ Expand the node that seems closest…

▪ What can go wrong?


Greedy Search
▪ Strategy: expand a node that you think is closest to a goal state
▪ Heuristic: estimate of distance to nearest goal for each state

▪ A common case:
▪ Best-first takes you straight to the (wrong) goal

▪ Worst-case: like a badly-guided DFS

[Demo: contours greedy empty (L3D1)]
[Demo: contours greedy pacman small maze (L3D4)]
A* Search
A* Search

UCS Greedy

A*
Combining UCS and Greedy
▪ Uniform-cost orders by path cost, or backward cost g(n)
▪ Greedy orders by goal proximity, or forward cost h(n)
[Example graph: states S, a, b, c, d, e, G with edge costs; each node is annotated with its backward cost g and its heuristic value h]
▪ A* Search orders by the sum: f(n) = g(n) + h(n)
Example: Teg Grenager
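UCS, Greedy, and A* can all be seen as one best-first search that differs only in the fringe priority f. A minimal sketch, with a hypothetical successor-list graph encoding assumed for illustration:

```python
import heapq

def best_first_search(start, is_goal, successors, f):
    """Generic best-first search.

    f(g, state) orders the fringe:
      UCS:    f = lambda g, s: g           (backward cost only)
      Greedy: f = lambda g, s: h(s)        (forward estimate only)
      A*:     f = lambda g, s: g + h(s)    (their sum)
    """
    fringe = [(f(0, start), 0, start, [start])]
    while fringe:
        _, g, state, path = heapq.heappop(fringe)
        if is_goal(state):   # goal test on dequeue, not on enqueue
            return path, g
        for nxt, cost in successors(state):
            heapq.heappush(fringe,
                           (f(g + cost, nxt), g + cost, nxt, path + [nxt]))
    return None, float('inf')
```

Swapping in the three priority functions above reproduces all three algorithms on the same fringe machinery.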
When should A* terminate?

▪ Should we stop when we enqueue a goal?


[Example: S —2→ A —2→ G and S —2→ B —3→ G, with h(S)=3, h(A)=2, h(B)=1, h(G)=0]

▪ No: only stop when we dequeue a goal


Is A* Optimal?
[Example: S —1→ A —3→ G, plus a direct edge S —5→ G; h(S)=7, h(A)=6, h(G)=0. A* returns the direct cost-5 path because f(A) = 1 + 6 = 7 > 5, even though the path through A costs only 4]
▪ What went wrong?


▪ Actual bad goal cost < estimated good goal cost
▪ We need estimates to be less than actual costs!
Admissible Heuristics
Idea: Admissibility

Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe.

Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs.
Admissible Heuristics
▪ A heuristic h is admissible (optimistic) if:

0 ≤ h(n) ≤ h*(n)

where h*(n) is the true cost to a nearest goal

▪ Examples: [figures showing heuristic values 4 and 15]

▪ Coming up with admissible heuristics is most of what’s involved in using A* in practice.
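On a small, finite problem, admissibility can be checked by brute force against the true cost-to-go h*; a minimal sketch, where the `true_cost` oracle is an assumed helper:

```python
def is_admissible(h, true_cost, states):
    # h is admissible iff 0 <= h(n) <= h*(n) for every state n,
    # where true_cost(n) = h*(n) is the optimal cost to a nearest goal.
    return all(0 <= h(s) <= true_cost(s) for s in states)
```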
Optimality of A* Tree Search
Optimality of A* Tree Search
Assume:

▪ A is an optimal goal node
▪ B is a suboptimal goal node
▪ h is admissible

Claim:
▪ A will exit the fringe before B
Optimality of A* Tree Search: Blocking
Proof:

▪ Imagine B is on the fringe
▪ Some ancestor n of A is on the
fringe, too (maybe A!)
▪ Claim: n will be expanded before B
1. f(n) is less than or equal to f(A)

Definition of f-cost
Admissibility of h
h = 0 at a goal
Optimality of A* Tree Search: Blocking
1. f(n) is less than or equal to f(A)

▪ Definition of f-cost says:
f(n) = g(n) + h(n) = (path cost to n) + (est. cost of n to A)
f(A) = g(A) + h(A) = (path cost to A) + (est. cost of A to A)
▪ The admissible heuristic must underestimate the true cost
h(A) = (est. cost of A to A) = 0
▪ So now, we have to compare:
f(n) = g(n) + h(n) = (path cost to n) + (est. cost of n to A)
f(A) = g(A) = (path cost to A)
▪ h(n) must be an underestimate of the true cost from n to A
(path cost to n) + (est. cost of n to A) ≤ (path cost to A)
g(n) + h(n) ≤ g(A)
f(n) ≤ f(A)
Optimality of A* Tree Search: Blocking
Proof:

▪ Imagine B is on the fringe
▪ Some ancestor n of A is on the
fringe, too (maybe A!)
▪ Claim: n will be expanded before B
1. f(n) is less than or equal to f(A)
2. f(A) is less than f(B)

B is suboptimal
h = 0 at a goal
Optimality of A* Tree Search: Blocking
2. f(A) is less than f(B)

▪ We know that:
f(A) = g(A) + h(A) = (path cost to A) + (est. cost of A to A)
f(B) = g(B) + h(B) = (path cost to B) + (est. cost of B to B)
▪ The heuristic must underestimate the true cost:
h(A) = h(B) = 0
▪ So now, we have to compare:
f(A) = g(A) = (path cost to A)
f(B) = g(B) = (path cost to B)
▪ We assumed that B is suboptimal! So
(path cost to A) < (path cost to B)
g(A) < g(B)
f(A) < f(B)
Optimality of A* Tree Search: Blocking
Proof:

▪ Imagine B is on the fringe
▪ Some ancestor n of A is on the
fringe, too (maybe A!)
▪ Claim: n will be expanded before B
1. f(n) is less than or equal to f(A)
2. f(A) is less than f(B)
3. n expands before B
▪ All ancestors of A expand before B
▪ A expands before B
▪ A* search is optimal
Properties of A*
Properties of A*

[Figure: search-tree fringes for Uniform-Cost vs A*]
UCS vs A* Contours

▪ Uniform-cost expands equally in all “directions”

▪ A* expands mainly toward the goal, but does hedge its bets to ensure optimality

[Figures: UCS contours expand symmetrically around Start; A* contours stretch from Start toward Goal]

[Demo: contours UCS / greedy / A* empty (L3D1)]


[Demo: contours A* pacman small maze (L3D5)]
Comparison

Greedy Uniform Cost A*


A* Applications
▪ Video games
▪ Pathing / routing problems
▪ Resource planning problems
▪ Robot motion planning
▪ Language analysis
▪ Machine translation
▪ Speech recognition
▪ …
[Demo: UCS / A* pacman tiny maze (L3D6,L3D7)]
[Demo: guess algorithm Empty Shallow/Deep (L3D8)]
Creating Heuristics
Creating Admissible Heuristics
▪ Most of the work in solving hard search problems optimally is in coming up
with admissible heuristics

▪ Often, admissible heuristics are solutions to relaxed problems, where new actions are available
▪ Inadmissible heuristics are often useful too


Example: 8 Puzzle

Start State Actions Goal State

▪ What are the states?


▪ How many states?
▪ What are the actions?
▪ How many successors from the start state?
▪ What should the costs be?
8 Puzzle I
▪ Heuristic: Number of tiles misplaced
▪ Why is it admissible?
▪ h(start) = 8
▪ This is a relaxed-problem heuristic
Start State Goal State

Average nodes expanded when the optimal path has…

        …4 steps   …8 steps   …12 steps
UCS     112        6,300      3.6 x 10^6
TILES   13         39         227

Statistics from Andrew Moore
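The misplaced-tiles heuristic is a one-liner; the 9-tuple, row-major board encoding (0 = blank) is an assumed representation, and the sample board below is a standard textbook configuration with h = 8, not necessarily the exact board pictured on the slide:

```python
def misplaced_tiles(state, goal):
    # Count tiles (not the blank) that are not in their goal position.
    # Admissible: every misplaced tile needs at least one move.
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # hypothetical sample board, h = 8
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
```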


8 Puzzle II

▪ What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring other tiles?

▪ Total Manhattan distance

Start State     Goal State

▪ Why is it admissible?
▪ h(start) = 3 + 1 + 2 + … = 18

Average nodes expanded when the optimal path has…

            …4 steps   …8 steps   …12 steps
TILES       13         39         227
MANHATTAN   12         25         73
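Total Manhattan distance sums, over all tiles, how far each tile is from its goal cell, ignoring collisions. A sketch with the same assumed 9-tuple encoding; the sample board here is a standard configuration whose sum is 18, matching the 3 + 1 + 2 + … on the slide:

```python
def manhattan_8puzzle(state, goal):
    # state/goal are 9-tuples read row-major on a 3x3 board; 0 is the blank.
    # Admissible because each tile must move at least its Manhattan
    # distance even in the relaxed (no-collision) puzzle.
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    return sum(abs(i // 3 - goal_pos[t][0]) + abs(i % 3 - goal_pos[t][1])
               for i, t in enumerate(state) if t != 0)
```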
8 Puzzle III
▪ How about using the actual cost as a heuristic?
▪ Would it be admissible?
▪ Would we save on nodes expanded?
▪ What’s wrong with it?

▪ With A*: a trade-off between quality of estimate and work per node
▪ As heuristics get closer to the true cost, you will expand fewer nodes but usually
do more work per node to compute the heuristic itself
Semi-Lattice of Heuristics
Trivial Heuristics, Dominance

▪ Dominance: ha ≥ hc if ∀n: ha(n) ≥ hc(n)

▪ Heuristics form a semi-lattice:


▪ Max of admissible heuristics is admissible

▪ Trivial heuristics
▪ Bottom of lattice is the zero heuristic (what
does this give us?)
▪ Top of lattice is the exact heuristic
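Taking the pointwise max is how this lattice is used in practice; a minimal sketch:

```python
def max_heuristic(*heuristics):
    # If every h_i is admissible, so is their pointwise max,
    # and it dominates each individual h_i.
    return lambda state: max(h(state) for h in heuristics)
```

For instance, one could combine a misplaced-tiles estimate with a Manhattan-distance estimate and keep the larger value at each state.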
Graph Search
Tree Search: Extra Work!
▪ Failure to detect repeated states can cause exponentially more work.

State Graph Search Tree


Graph Search
▪ In BFS, for example, we shouldn’t bother expanding the circled nodes (why?)

[Figure: BFS search tree from the example graph; the circled nodes are repeats of already-expanded states]
Graph Search
▪ Idea: never expand a state twice

▪ How to implement:
▪ Tree search + set of expanded states (“closed set”)
▪ Expand the search tree node-by-node, but…
▪ Before expanding a node, check to make sure its state has never been
expanded before
▪ If the state is new, add it to the closed set and expand the node; otherwise skip it

▪ Important: store the closed set as a set, not a list

▪ Can graph search wreck completeness? Why/why not?

▪ How about optimality?
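The recipe above (tree search plus a closed set, stored as a set) can be sketched for A*; the successor-list graph encoding is a hypothetical representation for illustration:

```python
import heapq

def astar_graph_search(start, is_goal, successors, h):
    # A* graph search: each state is expanded at most once.
    closed = set()                       # the closed set (a set, not a list)
    fringe = [(h(start), 0, start, [start])]
    while fringe:
        _, g, state, path = heapq.heappop(fringe)
        if is_goal(state):               # stop on dequeue of a goal
            return path, g
        if state in closed:              # state already expanded: skip
            continue
        closed.add(state)
        for nxt, cost in successors(state):
            heapq.heappush(fringe,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None, float('inf')
```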


A* Graph Search Gone Wrong?
State space graph Search tree

State space graph: S —1→ A, S —1→ B, A —1→ C, B —2→ C, C —3→ G
h(S) = 2, h(A) = 4, h(B) = 1, h(C) = 1, h(G) = 0

Search tree:
S (0+2)
├─ A (1+4) → C (2+1) → G (5+0)
└─ B (1+1) → C (3+1) → G (6+0)
Consistency of Heuristics
▪ Main idea: estimated heuristic costs ≤ actual costs

▪ Admissibility: heuristic cost ≤ actual cost to goal
h(A) ≤ actual cost from A to G

▪ Consistency: heuristic “arc” cost ≤ actual cost for each arc
h(A) – h(C) ≤ cost(A to C)

▪ Consequences of consistency:
▪ The f value along a path never decreases
h(A) ≤ cost(A to C) + h(C)
▪ A* graph search is optimal

[Figure: A (h=4) —1→ C (h=1) —3→ G (h=0)]
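Consistency can be checked arc by arc on a finite graph; a sketch with an assumed edge-list encoding. On the A–C arc from the slide (h(A) = 4, h(C) = 1, cost 1), h(A) − h(C) = 3 exceeds the arc cost, so that heuristic is inconsistent even though it is admissible:

```python
def is_consistent(h, edges, goals):
    # edges: iterable of (u, v, cost) arcs. Consistency requires
    # h(u) - h(v) <= cost(u, v) for every arc, and h = 0 at every goal.
    return (all(h(u) - h(v) <= c for u, v, c in edges)
            and all(h(g) == 0 for g in goals))
```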


Optimality of A* Graph Search
Optimality
▪ Tree search:
▪ A* is optimal if heuristic is admissible
▪ UCS is a special case (h = 0)

▪ Graph search:
▪ A* optimal if heuristic is consistent
▪ UCS optimal (h = 0 is consistent)

▪ Consistency implies admissibility

▪ In general, most natural admissible heuristics tend to be consistent, especially if derived from relaxed problems
A*: Summary
A*: Summary
▪ A* uses both backward costs and (estimates of) forward costs

▪ A* is optimal with admissible / consistent heuristics

▪ Heuristic design is key: often use relaxed problems


Search and Models

▪ Search operates over models of the world
▪ The agent doesn’t actually try all the plans out in the real world!
▪ Planning is all “in simulation”
▪ Your search is only as good as your models…
Search Gone Wrong?
Search Gone Wrong?
Appendix: Search Pseudo-Code
Tree Search Pseudo-Code
Graph Search Pseudo-Code
