Chapter 3
Problem Solving by Searching and
Constraint Satisfaction Problem
Melaku M.
Book: Artificial Intelligence, A Modern Approach (Russell & Norvig)
Problem Solving by Searching
An important application of AI is problem solving.
Problem solving is the process of generating a solution to a given problem:
“How can an agent find a sequence of actions that achieves its goals
when no single action will do?”
Searching is the process of finding a solution to a given problem.
A search algorithm takes a problem as input and returns a solution in the
form of an action sequence.
Search algorithms are one of the most important areas of AI.
Problem Solving Agents
• Problem-solving agents are goal-driven agents that focus on
satisfying their goals.
• In AI, search techniques are universal, well-known problem-
solving methods.
• Rational agents, or problem-solving agents, in AI mostly use
these search strategies or algorithms to solve a specific problem
and provide the best result.
Problem Solving Agent
Steps performed by problem-solving agents:
• Goal Formulation: limiting the objectives of the current state; deciding
what is to be achieved.
• Problem Formulation: deciding what actions should be taken to achieve the
formulated goal; describing the problem as a search problem.
• Search: the process of looking for a solution. A search algorithm takes a
problem as input and returns a solution.
• Solution: an action sequence that leads from the initial state to a goal state.
• Execution: the process of executing the sequence of actions (the solution).
Problem Formulation
A problem formulation specifies:
• a transition model (the outcome/result of each action),
• a goal test (the conditions the agent is trying to meet), and
• a cost function, i.e. distances between places, number of steps, etc.
Example : Holiday Planning
Imagine an agent in the city of Arad, Romania, enjoying a touring holiday.
The agent’s performance measure contains many factors: it wants to improve
its suntan, improve its Romanian, take in the sights, enjoy the nightlife (such as
it is), avoid hangovers, and so on. Now, suppose the agent has a non-refundable
ticket to fly out of Bucharest the following day. In that case, it
makes sense for the agent to adopt the goal of getting to Bucharest. Courses of
action that don’t reach Bucharest on time can be rejected without further
consideration and the agent’s decision problem is greatly simplified.
Formulate the goal and problem for the Romania problem
• Agent is currently in Arad, enjoying a touring holiday
in Romania; it has a non-refundable ticket to fly out of
Bucharest tomorrow.
– Formulate goal: be in Bucharest
– Formulate problem:
• States: various cities
• Actions: drive between cities
– Search: sequence of cities
Initial state: the state the agent starts in.
• e.g. IN(Arad)
Actions: the possible actions/moves available to the agent.
• IN(Arad) -> GO(Zerind), GO(Sibiu), GO(Timisoara)
Transition model: specifies the outcome/expected effect of each action.
• RESULT(s, a) returns the state that results from doing action a in state s.
• RESULT(IN(Arad), GO(Zerind)) = IN(Zerind)
Goal test: a function that checks whether a given state is a goal state.
• IN(x) == IN(Bucharest)
Path cost: the step cost of taking action a in state s to reach state s′ is denoted by
c(s, a, s′).
• c(IN(Arad), GO(Zerind), IN(Zerind)) = 75
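The components above can be sketched in Python. The road-map excerpt below covers only a few cities (distances from the AIMA map), and the function names (`actions`, `result`, `goal_test`, `step_cost`) are my own naming, mirroring the slide's components.

```python
# Illustrative excerpt of the Romania road map (not the full graph).
ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
}

INITIAL_STATE = "Arad"

def actions(state):
    """ACTIONS(s): the cities reachable by driving from `state`."""
    return list(ROADS.get(state, {}))

def result(state, action):
    """RESULT(s, a): driving to a neighboring city puts us in that city."""
    return action

def goal_test(state):
    """Goal test: are we in Bucharest?"""
    return state == "Bucharest"

def step_cost(s, a, s2):
    """c(s, a, s'): road distance between the two cities."""
    return ROADS[s][a]
```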
A simple problem-solving agent algorithm
Description of a simple problem-solving agent algorithm
• Figure 3.1 A simple problem-solving agent. It first
formulates a goal and a problem, then calls a search
procedure, which searches for a sequence of actions that
would solve the problem. It then executes the actions one
at a time (typically the first action of the sequence),
removing each step from the sequence as it is executed.
When this is complete, it formulates another goal and
starts over.
Problem formulation and Abstraction
The real world is absurdly complex, so the state space must be abstracted for
problem solving.
Abstraction is the process of removing detail from a representation, or
formulating a problem in a concise/precise way.
Abstraction is useful because carrying out the actions in the abstract solution is
easier than solving the original problem.
– We proposed a formulation of the problem of getting to Bucharest in terms
of the initial state, actions, transition model, goal test, and path cost. This
formulation seems reasonable, but it is still a model—an abstract
mathematical description—and not the real thing.
Cont.…
– We must abstract the state description.
• Compare the simple state description we have chosen, In(Arad), with the
actual state of the world, which includes many things: road conditions,
weather, rest stops, etc.
• All these considerations are left out of our state descriptions because they
are irrelevant to the problem of finding a route to Bucharest.
– We must abstract the actions.
• There are many actions that we omit altogether, e.g. turning on the radio,
turning the steering wheel, etc., because they are irrelevant when formulating actions.
Example of Problems
• Toy Problem – intended to illustrate or exercise various problem-solving
methods. Toy problems can be given a concise and exact description.
• Examples: vacuum world, Tic-Tac-Toe, chess, sliding-block puzzles, etc.
• Real-World Problem – one whose solutions people actually care about.
Real-world problems may not have a single agreed-upon description.
• Examples: route-finding problems, TSP, VLSI layout, robot navigation,
automatic assembly sequencing, protein design, etc.
Example: 8-puzzle
• States?
• Initial state?
• Actions?
• Goal test?
• Path cost?
Example: 8-puzzle
States: the location of each of the eight numbered tiles and the blank space.
Initial state: any configuration of the tiles can serve as the initial state.
Actions: sliding the blank space Right, Left, Up, or Down.
Transition model: given a state and an action, returns the resulting state. For
example, applying Right to a state where the blank is immediately
to the left of tile 5 yields a state with the 5 and the blank switched.
Goal test: checks whether a state matches the goal configuration shown in the figure.
Path cost: each step costs 1, so the path cost is the number of steps in the path.
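The 8-puzzle formulation can be sketched in Python. Representing a state as a 9-tuple read row by row with 0 for the blank, and using the blank-first goal configuration, are my own encoding choices, not the only ones possible.

```python
# Goal configuration: blank (0) first, tiles 1..8 in order (an assumption).
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

# Index offset of the blank for each slide direction on a 3x3 board.
MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def actions(state):
    """Legal slides of the blank, respecting the board edges."""
    i = state.index(0)
    acts = []
    if i >= 3: acts.append("Up")
    if i < 6: acts.append("Down")
    if i % 3 != 0: acts.append("Left")
    if i % 3 != 2: acts.append("Right")
    return acts

def result(state, action):
    """Transition model: swap the blank with the neighboring tile."""
    i = state.index(0)
    j = i + MOVES[action]
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

def goal_test(state):
    return state == GOAL
```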
8-Queens Problem
Figure: Almost a solution to the 8-queens problem
8-Queens Problem
• The goal of the 8-queens problem is to place eight queens on a
chessboard such that no queen attacks any other. A queen attacks
another queen in the same row, column, or diagonal.
• States: any arrangement of 0 to 8 queens on the board is a state.
• Initial state: no queens on the board.
• Actions: add a queen to any empty square that is not
attacked by any other queen.
• Transition model: returns the board with a queen added to the
specified square.
• Goal test: 8 queens are on the board, none attacked.
• Path cost: not needed, because only final states are counted.
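The incremental formulation above can be sketched in Python. Encoding a state as a tuple of column indices, one per filled row, is my own choice; it rules out same-row attacks by construction, so only column and diagonal attacks need checking.

```python
def attacks(state, col):
    """Would a queen placed in the next row, at column `col`, be attacked?"""
    row = len(state)
    for r, c in enumerate(state):
        # Same column, or same diagonal (equal row and column offsets).
        if c == col or abs(c - col) == abs(r - row):
            return True
    return False

def actions(state):
    """Add a queen only to a non-attacked column of the next row."""
    return [c for c in range(8) if not attacks(state, c)]

def result(state, col):
    """Transition model: the board with one more queen placed."""
    return state + (col,)

def goal_test(state):
    """8 queens on the board; attacks were already excluded by actions()."""
    return len(state) == 8
```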
Try by yourself: vacuum world
• States?
• Initial state?
• Actions?
• Goal test?
• Path cost?
Recap of the steps:
• Goal Formulation: limiting the objectives of the current state.
• Problem Formulation: actions and states.
• Search: looking for sequences of actions. A search algorithm takes a
problem as input and returns a solution.
• Execution: executing the best solution.
Basic search algorithms
• How do we find solutions to the previous problems?
– Search the state space (remember: the complexity of the space depends on the
state representation)
– Here: search through explicit tree generation
– A search tree is used to model the sequence of actions. It is
constructed with the initial state as the root. The actions taken make the
branches, and the nodes are the results of those actions (successor
function).
– A node has a depth, a path cost, and an associated state in the state space.
continued
• Search involves moving nodes from the unexplored
region to the explored region.
• A strategic ordering of these moves yields a better
search. These moves are also known as node expansion.
Simple Tree Search Algorithm
function TREE-SEARCH(problem, strategy) returns a solution or failure
  initialize the search tree using the initial state of the problem
  loop do
    if there are no candidates for expansion then return failure
    choose a leaf node for expansion according to strategy
    if the node contains a goal state then return the corresponding solution
    else expand the node and add the resulting nodes to the search tree
  end
State space vs. search tree
• A state is a (representation of a) physical configuration.
• A node is a data structure belonging to a search tree.
– A node has a parent, children, … and includes path cost, depth, …
– Here node = <state, parent-node, action, path-cost, depth>
– FRINGE = the collection of generated nodes that are not yet expanded or tested.
Tree search algorithm
function TREE-SEARCH(problem, fringe) returns a solution or failure
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node ← REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds
      then return SOLUTION(node)
    fringe ← INSERT-ALL(EXPAND(node, problem), fringe)
Tree search algorithm Contd..
function EXPAND(node, problem) returns a set of nodes
  successors ← the empty set
  for each <action, result> in SUCCESSOR-FN[problem](STATE[node]) do
    s ← a new NODE
    STATE[s] ← result
    PARENT-NODE[s] ← node
    ACTION[s] ← action
    PATH-COST[s] ← PATH-COST[node] + STEP-COST(node, action, s)
    DEPTH[s] ← DEPTH[node] + 1
    add s to successors
  return successors
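The TREE-SEARCH and EXPAND pseudocode above can be sketched as runnable Python. The `problem` interface (an object with `initial_state`, `actions`, `result`, `goal_test`, and `step_cost`) is an assumed naming convention mirroring the slides; a FIFO fringe is used here, which makes the search breadth-first.

```python
from collections import deque

class Node:
    """node = <state, parent-node, action, path-cost, depth>"""
    def __init__(self, state, parent=None, action=None, path_cost=0, depth=0):
        self.state, self.parent = state, parent
        self.action, self.path_cost, self.depth = action, path_cost, depth

def expand(node, problem):
    """EXPAND: generate the successor nodes of `node`."""
    succs = []
    for action in problem.actions(node.state):
        s = problem.result(node.state, action)
        cost = node.path_cost + problem.step_cost(node.state, action, s)
        succs.append(Node(s, node, action, cost, node.depth + 1))
    return succs

def tree_search(problem):
    """TREE-SEARCH with a FIFO fringe; returns the goal node or None."""
    fringe = deque([Node(problem.initial_state)])
    while fringe:
        node = fringe.popleft()            # REMOVE-FIRST(fringe)
        if problem.goal_test(node.state):
            return node
        fringe.extend(expand(node, problem))
    return None
```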
Search Strategies
• There are two kinds of search, based on whether they
use information about the goal:
– Uninformed search
• does not use any domain knowledge (blind search)
– Informed search
• uses domain knowledge (heuristic search)
Search Strategies
• A strategy is defined by picking the order of node expansion.
• Performance measures:
– Completeness – does it always find a solution if one exists?
– Time complexity – number of nodes generated/expanded
– Space complexity – maximum number of nodes in memory
– Optimality – does it always find a least-cost solution?
• Time and space complexity are measured in terms of
– b – maximum branching factor of the search tree
– d – depth of the least-cost solution
– m – maximum depth of the state space (may be ∞)
Uninformed Search Strategies
Uninformed Search Strategies
• Uninformed strategies use only the information
available in the problem definition
– Breadth-first search
– Depth-first search
– Iterative deepening search
– Depth-limited search
– Uniform-cost search
Breadth-first search
• Expand the shallowest unexpanded node.
• Implementation:
– the fringe is a FIFO queue, i.e., new successors go at the end of the queue.
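A minimal breadth-first sketch over a successor function, returning the list of states from the start to a goal. The path-based fringe and `successors` interface are illustrative assumptions, not the only way to implement BFS.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """Expand the shallowest node first via a FIFO queue of paths."""
    fringe = deque([[start]])          # FIFO queue
    while fringe:
        path = fringe.popleft()        # shallowest path first
        if goal_test(path[-1]):
            return path
        for s in successors(path[-1]):
            fringe.append(path + [s])  # new successors go at the end
    return None
```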
Evaluation of Breadth-First Search
– Complete? Yes (if b is finite)
– Time? b + b² + b³ + … + b^d = O(b^d)
– Space? O(b^d) (keeps every node in memory)
– Optimal? Yes (if cost = 1 per step)
– Space is the big problem; BFS can easily generate nodes at 100 MB/sec
Depth first search
• Expand the deepest unexpanded node.
• Implementation: the fringe is a LIFO stack, i.e., put successors at the front.
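Depth-first search differs from the BFS sketch only in the fringe: a LIFO stack, so the deepest (most recently generated) node is expanded first. As with the slide's analysis, this simple version can fail to terminate on cyclic or infinite state spaces; the graph interface is again an illustrative assumption.

```python
def depth_first_search(start, goal_test, successors):
    """Expand the deepest node first via a LIFO stack of paths."""
    fringe = [[start]]                 # LIFO stack
    while fringe:
        path = fringe.pop()            # deepest path first
        if goal_test(path[-1]):
            return path
        # Reversed so the leftmost successor is popped (expanded) first.
        for s in reversed(successors(path[-1])):
            fringe.append(path + [s])
    return None
```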
Evaluation of Depth-First Search
• Completeness:
– No, unless the search space is finite.
• Time complexity: O(b^m)
• Space complexity: O(bm)
• Optimality: No
Depth-Limited Search Algorithm:
The failure of depth-first search in infinite state spaces can be alleviated by
supplying depth-first search with a predetermined depth limit ℓ,
i.e., depth-limited search = depth-first search with depth limit ℓ.
That is, nodes at depth ℓ are treated as if they have no successors. This
approach is called depth-limited search.
The depth limit solves the infinite-path problem.
Depth-limited search can terminate with two kinds of failure:
– Standard failure value: indicates that the problem has no solution.
– Cutoff failure value: indicates no solution within the given depth limit.
Evaluation of DLS
• Completeness: incomplete if we choose ℓ < d, that is, if the
shallowest goal is beyond the depth limit.
• Optimality: it is a special case of DFS and is nonoptimal if we choose
ℓ > d.
• Time complexity: O(b^ℓ), where ℓ is the depth limit.
• Space complexity: O(bℓ).
Iterative Deepening Search
• Combines the advantages of breadth-first and depth-first search
• Continuously increments the depth limit by one until a solution is found
• By using a depth-first approach on every iteration, iterative deepening search
avoids the memory cost of breadth-first search.
Example iterations on a tree rooted at A:
Iteration 1 (b):  A
Iteration 2 (b²): A B C
Iteration 3 (b³): A B D E C F G H
Iteration 4 (b⁴): A B D I J E K L M C F N
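Depth-limited search and the iterative-deepening wrapper around it can be sketched as follows. `successors` is an assumed successor-function interface, and the string `"cutoff"` stands for the cutoff failure value from the slides (`None` is the standard failure value).

```python
def depth_limited_search(state, goal_test, successors, limit):
    """Recursive DFS that treats nodes at depth `limit` as having no successors."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return "cutoff"                # cutoff failure value
    cutoff_occurred = False
    for s in successors(state):
        outcome = depth_limited_search(s, goal_test, successors, limit - 1)
        if outcome == "cutoff":
            cutoff_occurred = True
        elif outcome is not None:
            return [state] + outcome   # solution path found below
    return "cutoff" if cutoff_occurred else None  # None = standard failure

def iterative_deepening_search(start, goal_test, successors, max_depth=50):
    """Increment the depth limit by one until a solution is found."""
    for limit in range(max_depth + 1):
        outcome = depth_limited_search(start, goal_test, successors, limit)
        if outcome != "cutoff":
            return outcome
    return None
```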
Evaluation of Iterative Deepening Search
• Complete? Yes
• Time? (d)b + (d − 1)b² + … + (1)b^d = O(b^d)
• Space? O(bd)
• Optimal? Yes, if step cost = 1
Comparison of Strategies
• Breadth-first is complete and optimal, but has high
space complexity
• Depth-first is space efficient, but neither complete
nor optimal
• Iterative deepening combines benefits of DFS and
BFS and is asymptotically optimal
Informed Search Strategies
Informed search strategies
• An informed search strategy uses problem-specific
knowledge beyond the definition of the problem itself.
• It can find solutions more efficiently than an uninformed
strategy.
• Also called heuristic search.
Cont.…
• Informed search strategies (Heuristic Search):
– Best-first search
• Greedy Best-first search
• A* search
Best First Search
• Best-first search is an instance of the general TREE-SEARCH algorithm.
• A node is selected for expansion based on an evaluation function, f(n).
– f(n) tells you the approximate distance of a node, n, from a goal node.
– f(n) is constructed as a cost estimate, so the node with the lowest evaluation is expanded
first.
• The choice of f determines the search strategy.
• Most best-first algorithms include as a component of f a heuristic function, denoted h(n):
– h(n) = the estimated cost of the cheapest path from the state at node n to a goal state.
• A heuristic is an operationally effective bit of information on how to direct search in a
problem space.
• Heuristics are only approximately correct. Their purpose is to minimize search on average.
• If n is a goal node, then h(n) = 0.
Greedy best-first search
• Expands the node that appears to be closest to the goal.
• Evaluates nodes by using just the heuristic function;
– that is, f(n) = h(n).
– uses the straight-line distance heuristic (hSLD).
– “greedy”: at each step it tries to get as close to the goal
as it can.
Example: Route-finding problems in Romania
For this particular problem, greedy best-first search using
hSLD finds a solution without ever expanding a node that
is not on the solution path; hence, its search cost is
minimal.
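A greedy best-first sketch for the Romania example: f(n) = h(n), the straight-line distance to Bucharest. The hSLD values and road list are an excerpt of the AIMA map covering just the cities this search touches; the priority-queue-of-paths implementation is my own choice.

```python
import heapq

H_SLD = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
         "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193, "Bucharest": 0}
ROADS = {"Arad": ["Zerind", "Sibiu", "Timisoara"],
         "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
         "Fagaras": ["Sibiu", "Bucharest"]}

def greedy_best_first(start, goal):
    """Always expand the fringe node with the smallest h(n) = hSLD."""
    fringe = [(H_SLD[start], [start])]       # priority queue keyed on h
    while fringe:
        _, path = heapq.heappop(fringe)
        if path[-1] == goal:
            return path
        for city in ROADS.get(path[-1], []):
            heapq.heappush(fringe, (H_SLD[city], path + [city]))
    return None
```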
Evaluation of greedy best-first search
Optimal?
No!
Found: Arad → Sibiu → Fagaras → Bucharest (450 km); the optimal route,
through Rimnicu Vilcea and Pitesti, is 418 km.
Evaluation of greedy best-first search
Complete?
No – it can get stuck in loops,
e.g., Iasi → Neamt → Iasi → Neamt → …
Evaluation of greedy best-first search
• Time?
– O(b^m) – worst case (like depth-first search)
– but a good heuristic can give dramatic improvement
• Space?
– O(b^m) – keeps all nodes in memory
A* search
• The best-known form of best-first search,
• pronounced “A-star search”.
• Idea: avoid expanding paths that are already expensive.
– Evaluation function: f(n) = g(n) + h(n)
• g(n) gives the path cost from the start node to node n
• h(n) is the estimated cost of the cheapest path from n to the goal
• f(n) = estimated cost of the cheapest solution through n (A* expands the
node with the lowest value of g(n) + h(n)).
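An A* sketch on the Romania example: f(n) = g(n) + h(n). Road distances and hSLD values are an excerpt of the AIMA map; unlike greedy best-first, A* finds the cheaper route through Rimnicu Vilcea and Pitesti.

```python
import heapq

ROADS = {"Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
         "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
         "Fagaras": {"Sibiu": 99, "Bucharest": 211},
         "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
         "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101, "Craiova": 138}}
H_SLD = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
         "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

def a_star(start, goal):
    """Expand the fringe node with the lowest f = g + h."""
    fringe = [(H_SLD[start], 0, [start])]      # (f, g, path)
    while fringe:
        f, g, path = heapq.heappop(fringe)     # lowest f first
        if path[-1] == goal:
            return path, g
        for city, dist in ROADS.get(path[-1], {}).items():
            g2 = g + dist                      # path cost so far
            heapq.heappush(fringe, (g2 + H_SLD[city], g2, path + [city]))
    return None
```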
A* search, evaluation
• Completeness: yes
• Time complexity: exponential in the path length
• Space complexity: exponential (all nodes are stored)
• Optimality: yes (with an admissible heuristic)
Avoiding Repeated States
• A path from the first state to the next state and back to the first state again generates a repeated
state in the search tree, via a loopy path.
• This makes the search tree infinite, because there is no limit to how often one can traverse a loop.
• Loops can cause certain algorithms to fail, making otherwise solvable problems unsolvable.
• Fortunately, there is no need to consider loopy paths.
• We can rely on more than intuition for this: because path costs are additive and step costs are
nonnegative, a loopy path to any given state is never better than the same path with the loop
removed.
• The way to avoid exploring redundant paths is to remember where one has been.
• To do this, we augment the TREE-SEARCH algorithm with a data structure called the explored
set (also known as the closed list), which remembers every expanded node.
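Augmenting tree search with an explored set (often called graph search) can be sketched as follows; with it, a loopy path such as Iasi–Neamt–Iasi can never recur. The breadth-first fringe and `successors` interface are illustrative assumptions.

```python
from collections import deque

def graph_search(start, goal_test, successors):
    """Tree search plus an explored set: each state is expanded at most once."""
    fringe = deque([[start]])
    explored = set()                   # the "closed list" of expanded states
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        if state in explored:
            continue                   # skip redundant paths to this state
        explored.add(state)
        for s in successors(state):
            if s not in explored:
                fringe.append(path + [s])
    return None
```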
Constraint Satisfaction Search
• Three ways to represent states and the transitions between them.
– Atomic representation: a state is a black box with no internal structure;
– Factored representation: a state consists of a vector of attribute values; values
can be Boolean, real-valued, or one of a fixed set of symbols.
– Structured representation: a state includes objects, each of which may have
attributes of its own as well as relationships to other objects.
…cont’d
• Many important areas of AI are based on factored
representations, including constraint satisfaction
algorithms, propositional logic, planning, Bayesian
networks, and machine learning algorithms.
• A problem is solved when each variable has a value that
satisfies all the constraints on the variable.
• A problem described this way is called a constraint
satisfaction problem, or CSP.
Example problem: Map coloring
• Task: color each region either red, green, or blue in such a way that no neighboring regions have the same color.
• To formulate this as a CSP,
– we define the variables to be the regions: X = {WA, NT, Q, NSW, V, SA, T}
– The domain of each variable is the set Di = {red, green, blue}.
– The constraints require neighboring regions to have distinct colors. Since there are nine places where regions border,
there are nine constraints:
C = {SA ≠ WA, SA ≠ NT, SA ≠ Q, SA ≠ NSW, SA ≠ V, WA ≠ NT, NT ≠ Q, Q ≠ NSW, NSW ≠ V}.
There are many possible solutions to this problem, such as
{WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = red}.
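The map-coloring CSP above can be solved with a minimal backtracking search. The adjacency table encodes exactly the nine constraints listed; the fixed variable and color ordering is a naive choice (no variable- or value-ordering heuristics).

```python
# Adjacency encodes the nine neighbor-inequality constraints; T has none.
NEIGHBORS = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
DOMAIN = ["red", "green", "blue"]
VARIABLES = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]

def consistent(var, color, assignment):
    """A value is consistent if no assigned neighbor already has it."""
    return all(assignment.get(n) != color for n in NEIGHBORS[var])

def backtrack(assignment=None):
    """Assign variables one at a time, undoing choices that lead to failure."""
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(VARIABLES):
        return assignment
    var = next(v for v in VARIABLES if v not in assignment)
    for color in DOMAIN:
        if consistent(var, color, assignment):
            assignment[var] = color
            solution = backtrack(assignment)
            if solution:
                return solution
            del assignment[var]        # undo and try the next color
    return None
```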