Chapter 3
Solving Problems by Searching
and Constraint Satisfaction Problems
Introduction
An agent can act by establishing goals
and considering sequences of actions that
might achieve those goals.
A goal and a set of means for achieving
the goal is called a problem, and the
process of exploring what the means can
do is called search.
A problem is really a collection of
information that the agent will use to
decide what to do.
Problem Solving Agents
A problem-solving agent is one kind of
goal-based agent that decides what to do
by finding sequences of actions that lead
to desirable states.
Intelligent agents are supposed to act in such a
way that the environment goes through a
sequence of states that maximizes the
performance measure.
Cont’d
Goal formulation, based on the current
situation, is the first step in problem solving.
As well as formulating a goal, the agent may
wish to decide on some other factors that
affect the desirability of different ways of
achieving the goal.
Actions can be viewed as causing transitions
between world states, so obviously the agent
has to find out which actions will get it to a
goal state.
Before it can do this, it needs to decide what
sorts of actions and states to consider.
Cont’d
Problem formulation is the process of deciding
what actions and states to consider, and follows
goal formulation.
The agent will not know which of its possible
actions is best, because it does not know enough
about the state that results from taking each action.
If the agent has no additional knowledge, then it is
stuck. The best it can do is to choose one of the
actions at random.
There are different amounts of knowledge that an
agent can have concerning its actions and the state
that it is in.
This depends on how the agent is connected to its
environment through its percepts and actions.
Cont’d
We find that there are four essentially different
types of problems:
1. Single-state problems
In a single-state problem the state is always
known with certainty.
First, suppose that the agent's sensors give it
enough information to tell exactly which state it
is in (i.e., the world is accessible); and suppose
that it knows exactly what each of its actions
does.
Then it can calculate exactly which state it will
be in after any sequence of actions.
Cont’d
2. Multiple-state problems
Suppose that the robot has no sensor that
can tell it which room it is in and it doesn't
know where it is initially.
Then it must consider sets of possible
states.
In multiple-state problems, the agent knows only
which states it might be in.
Cont’d
3. Contingency problems
Obviously, the agent does have a way to solve the problem
starting from one of the states {1, 3} (of the vacuum-world
example): first suck, then move right, then
suck only if there is dirt there.
Thus, solving this problem requires sensing during the
execution phase.
Notice that the agent must now calculate a whole tree of
actions, rather than a single action sequence. In general,
each branch of the tree deals with a possible contingency
that might arise.
For this reason, we call this a contingency problem. Many
problems in the real, physical world are contingency
problems, because exact prediction is impossible. For this
reason, many people keep their eyes open while walking
around or driving.
Solving a contingency problem therefore means constructing
a plan with branches, one for each contingency that sensing
may reveal during execution.
Cont’d
4. Exploration problems
suppose the agent has a map of Romania, either on paper or
in its memory.
The point of a map is to provide the agent with information
about the states it might get itself into, and the actions it can
take.
The agent can use this information to consider subsequent
stages of a hypothetical journey through each of the towns,
to try to find a journey that eventually gets to goal by
carrying out the driving actions.
In general, an agent with several immediate options of
unknown value can decide what to do by first examining
different possible sequences of actions that lead to states of
known value, and then choosing the best one.
This process of looking for such a sequence is called search.
A search algorithm takes a problem as input and returns a
solution in the form of an action sequence.
WELL-DEFINED PROBLEMS AND
SOLUTIONS
A problem is really a collection of information
that the agent will use to decide what to do.
We have seen that the basic elements of a
problem definition are the states and actions.
To capture these formally, we need the following:
The initial state is the state that the agent knows
itself to be in.
The set of possible actions available to the
agent. The term operator is used to denote
the description of an action in terms of which
state will be reached by carrying out the action
in a particular state.
CONT’D
(An alternate formulation uses a successor function S.
Given a particular state x, S(x) returns the set of states
reachable from x by any single action.)
Together, these define the state space of the problem:
the set of all states reachable from the initial state by
any sequence of actions.
A path in the state space is simply any sequence of
actions leading from one state to another.
The goal test, which the agent can apply to a single
state description to determine if it is a goal state.
Sometimes there is an explicit set of possible goal
states and the test simply checks to see if we have
reached one of them.
Sometimes the goal is specified by an abstract property
rather than an explicitly enumerated set of states.
CONT’D
For example, in chess, the goal is to reach a
state called "checkmate," where the opponent's
king can be captured on the next move no
matter what the opponent does. Finally, it may
be the case that one solution is preferable to
another, even though they both reach the goal.
For example, we might prefer paths with fewer
or less costly actions.
A path cost function is a function that assigns
a cost to a path.
In all cases we will consider, the cost of a path
is the sum of the costs of the individual actions
along the path.
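Taken together, these elements can be captured in a small data structure. The following Python sketch is purely illustrative (the class name and fields are assumptions, not from the text): an initial state, a successor function S(x), a goal test, and a path cost that sums individual action costs.

```python
# Hypothetical sketch of a problem definition: initial state,
# successor function S(x), goal test, and path cost function.
class Problem:
    def __init__(self, initial, successors, is_goal, step_cost):
        self.initial = initial        # state the agent knows itself to be in
        self.successors = successors  # S(x): states reachable from x by one action
        self.is_goal = is_goal        # goal test on a single state description
        self.step_cost = step_cost    # cost of one action between two states

    def path_cost(self, path):
        # Cost of a path = sum of the costs of its individual actions.
        return sum(self.step_cost(a, b) for a, b in zip(path, path[1:]))

# Toy example: move from 0 to 3 along the integers, each step costs 1.
p = Problem(0, lambda x: {x + 1, x - 1}, lambda x: x == 3,
            lambda a, b: 1)
print(p.path_cost([0, 1, 2, 3]))  # 3
```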
SEARCH STRATEGIES
The majority of work in the area of search has
gone into finding the right search strategy for
a problem.
In our study of the field we will evaluate
strategies in terms of four criteria:
Completeness: is the strategy guaranteed to
find a solution when there is one?
Time complexity: how long does it take to find
a solution?
Space complexity: how much memory does it
need to perform the search?
Optimality: does the strategy find the highest-
quality solution when there are several different
solutions?
CONT’D
There are some search strategies that come under the
heading of uninformed search.
The term means that they have no information about the
number of steps or the path cost from the current state to
the goal; all they can do is distinguish a goal state from a
non-goal state.
Uninformed search is also sometimes called blind search.
Strategies that use such considerations are called informed
search strategies or heuristic search strategies.
Uninformed search is less effective than informed search.
Uninformed search is still important, however, because
there are many problems for which there is no additional
information to consider.
The six uninformed search strategies are distinguished by
the order in which nodes are expanded. It turns out that this
difference can matter a great deal, as we shall shortly see.
BREADTH-FIRST SEARCH
One simple search strategy is a breadth-first
search.
In this strategy, the root node is expanded first, then
all the nodes generated by the root node are
expanded next, and then their successors, and so
on.
In general, all the nodes at depth d in the search
tree are expanded before the nodes at depth d + 1.
Breadth-first search can be implemented by calling
the general-search algorithm with a queuing function
that puts the newly generated states at the end of
the queue, after all the previously generated states:
function BREADTH-FIRST-SEARCH(problem) returns a solution or failure
  return GENERAL-SEARCH(problem, ENQUEUE-AT-END)
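As a concrete illustration of this queuing discipline, here is a minimal Python sketch (the graph and function names are hypothetical) that keeps a FIFO queue of paths and always appends newly generated states at the end:

```python
from collections import deque

# Breadth-first search: newly generated states go at the END of
# the queue (FIFO), so shallower paths are expanded first.
def breadth_first_search(initial, successors, is_goal):
    frontier = deque([[initial]])        # queue of paths
    while frontier:
        path = frontier.popleft()        # shallowest path first
        state = path[-1]
        if is_goal(state):
            return path
        for s in successors(state):
            frontier.append(path + [s])  # ENQUEUE-AT-END
    return None                          # failure

# Toy state space as an adjacency dict (hypothetical).
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(breadth_first_search('A', graph.get, lambda s: s == 'D'))
# ['A', 'B', 'D']
```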
UNIFORM COST SEARCH
Breadth-first search finds the shallowest goal
state, but this may not always be the least-cost
solution for a general path cost function.
Uniform cost search modifies the breadth-
first strategy by always expanding the lowest-
cost node on the fringe (as measured by the
path cost g(n)), rather than the lowest-depth
node.
It is easy to see that breadth-first search is just
uniform cost search with g(n) = depth(n).
When certain conditions are met, the first
solution that is found is guaranteed to be the
cheapest solution, because if there were a
cheaper path that was a solution, it would have
been expanded earlier and thus found first.
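A minimal Python sketch of this idea, assuming a hypothetical weighted graph: the fringe is a priority queue ordered by the path cost g(n), so the lowest-cost node is always expanded next.

```python
import heapq

# Uniform cost search: always expand the lowest-cost node on the
# fringe, measured by the path cost g(n) rather than depth.
def uniform_cost_search(initial, successors, is_goal):
    frontier = [(0, [initial])]               # priority queue on g(n)
    while frontier:
        g, path = heapq.heappop(frontier)     # cheapest path so far
        state = path[-1]
        if is_goal(state):
            return g, path
        for nxt, cost in successors(state):
            heapq.heappush(frontier, (g + cost, path + [nxt]))
    return None

# Hypothetical weighted graph: the direct edge A->C costs more
# than the two-step route through B.
graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(uniform_cost_search('A', graph.get, lambda s: s == 'C'))
# (2, ['A', 'B', 'C'])
```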
DEPTH-FIRST SEARCH
Depth-first search always expands one of the
nodes at the deepest level of the tree.
Only when the search hits a dead end (a non-goal
node with no expansion) does the search
go back and expand nodes at shallower levels.
This strategy can be implemented by general-
search with a queuing function that always puts
the newly generated states at the front of the
queue. Because the expanded node was the
deepest, its successors will be even deeper and
are therefore now the deepest.
CONT’D
Depth-first search has very modest memory
requirements.
It needs to store only a single path from the root to a
leaf node, along with the remaining unexpanded
sibling nodes for each node on the path.
The time complexity for depth-first search is O(b^m).
For problems that have very many solutions, depth-
first may actually be faster than breadth-first,
because it has a good chance of finding a solution
after exploring only a small portion of the whole
space.
Breadth-first search would still have to look at all the
paths of length d - 1 before considering any of length
d.
Depth-first search is still O(b^m) in the worst case.
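The same queuing discipline can be sketched in Python (hypothetical graph and names): newly generated states go at the front, so the structure behaves as a LIFO stack and the deepest node is expanded next.

```python
# Depth-first search: newly generated states go at the FRONT of
# the queue, i.e. the frontier behaves as a LIFO stack.
def depth_first_search(initial, successors, is_goal):
    frontier = [[initial]]               # stack of paths
    while frontier:
        path = frontier.pop()            # deepest path first
        state = path[-1]
        if is_goal(state):
            return path
        for s in successors(state):
            frontier.append(path + [s])  # push on top of the stack
    return None                          # failure

# Toy state space as an adjacency dict (hypothetical).
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(depth_first_search('A', graph.get, lambda s: s == 'D'))
# ['A', 'B', 'D']
```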
DEPTH-LIMITED SEARCH
Depth-limited search avoids the pitfalls
of depth-first search by imposing a cutoff
on the maximum depth of a path.
This cutoff can be implemented with a
special depth-limited search algorithm, or
by using the general search algorithm with
operators that keep track of the depth.
The time and space complexity of depth-
limited search is similar to depth-first
search.
It takes O(b^l) time and O(bl) space, where
l is the depth limit.
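A recursive Python sketch of the cutoff idea (illustrative names, hypothetical graph): the search simply refuses to go deeper than the limit l.

```python
# Depth-limited search: depth-first search with a cutoff "limit"
# on the maximum depth of a path, implemented recursively.
def depth_limited_search(state, successors, is_goal, limit):
    if is_goal(state):
        return [state]
    if limit == 0:
        return None                      # cutoff reached, give up here
    for s in successors(state):
        result = depth_limited_search(s, successors, is_goal, limit - 1)
        if result is not None:
            return [state] + result
    return None

# Hypothetical chain of states: the goal C sits at depth 2.
graph = {'A': ['B'], 'B': ['C'], 'C': []}
print(depth_limited_search('A', graph.get, lambda s: s == 'C', 2))
# ['A', 'B', 'C']
print(depth_limited_search('A', graph.get, lambda s: s == 'C', 1))
# None: the cutoff is too shallow to reach the goal
```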
ITERATIVE DEEPENING SEARCH
Iterative deepening search is a strategy that
sidesteps the issue of choosing the best depth
limit by trying all possible depth limits:
first depth 0, then depth 1, then depth 2, and so
on.
In effect, iterative deepening combines the
benefits of depth-first and breadth-first search.
It is optimal and complete, like breadth-first
search, but has only the modest memory
requirements of depth-first search.
The order of expansion of states is similar to
breadth-first, except that some states are
expanded multiple times.
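This loop over increasing limits can be sketched in Python (illustrative names, hypothetical graph; the depth cap exists only so the demo terminates):

```python
# Helper: depth-first search with a cutoff on path depth.
def depth_limited(state, successors, is_goal, limit):
    if is_goal(state):
        return [state]
    if limit == 0:
        return None
    for s in successors(state):
        result = depth_limited(s, successors, is_goal, limit - 1)
        if result is not None:
            return [state] + result
    return None

# Iterative deepening: try depth 0, then 1, then 2, ... until a
# solution appears (capped at max_depth for this demo).
def iterative_deepening_search(initial, successors, is_goal, max_depth=50):
    for limit in range(max_depth + 1):
        result = depth_limited(initial, successors, is_goal, limit)
        if result is not None:
            return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(iterative_deepening_search('A', graph.get, lambda s: s == 'D'))
# ['A', 'B', 'D']
```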
BIDIRECTIONAL SEARCH
The idea behind bidirectional search is to
simultaneously search both forward from
the initial state and backward from the
goal, and stop when the two searches
meet in the middle.
COMPARING SEARCH STRATEGIES
The six search strategies can be compared in terms of the
four evaluation criteria set forth above.
When to use what
Breadth-First Search:
Some solutions are known to be shallow.
Uniform-Cost Search:
Actions have varying costs.
The least-cost solution is required.
This is the only uninformed search that worries about costs.
Depth-First Search:
Many solutions exist.
The depth of the solution is known (or can be estimated well).
Iterative-Deepening Search:
Space is limited and the shortest solution path is required.
AVOIDING REPEATED STATES
We have all but ignored one of the most important
complications to the search process: the possibility
of wasting time by expanding states that have
already been encountered and expanded before on
some other path.
For some problems, this possibility never comes up;
each state can only be reached one way.
The efficient formulation of the 8-queens problem is
efficient in large part because of this: each state can
only be derived through one path.
For many problems, repeated states are
unavoidable.
This includes all problems where the operators are
reversible, such as route-finding problems.
CONT’D
The search trees for these problems are infinite,
but if we prune some of the repeated states, we
can cut the search tree down to finite size,
generating only the portion of the tree that
spans the state space graph.
Even when the tree is finite, avoiding repeated
states can yield an exponential reduction in
search cost.
The space contains only m + 1 states, where m
is the maximum depth.
Because the tree includes each possible path
through the space, it has 2^m branches.
CONT’D
There are three ways to deal with repeated states, in
increasing order of effectiveness and computational
overhead:
Do not return to the state you just came from. Have the
expand function (or the operator set) refuse to generate
any successor that is the same state as the node's
parent.
Do not create paths with cycles in them. Have the
expand function (or the operator set) refuse to generate
any successor of a node that is the same as any of the
node's ancestors.
Do not generate any state that was ever generated
before.
This requires every state that is generated to be kept
in memory, resulting in a space complexity of O(b^d).
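The third, strongest strategy can be sketched in Python (illustrative names; the graph is a hypothetical route-finding space with reversible operators, so every edge goes both ways and repeats are otherwise unavoidable):

```python
from collections import deque

# Avoiding repeated states, strategy 3: never generate a state
# that was ever generated before, using a set of seen states.
def bfs_no_repeats(initial, successors, is_goal):
    seen = {initial}                   # every state ever generated
    frontier = deque([[initial]])
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for s in successors(state):
            if s not in seen:          # skip repeated states
                seen.add(s)
                frontier.append(path + [s])
    return None

# Reversible operators create repeats: each edge goes both ways.
graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
print(bfs_no_repeats('A', graph.get, lambda s: s == 'D'))
# ['A', 'B', 'C', 'D']
```

Keeping every generated state in memory is exactly what gives this approach its O(b^d) space cost.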
CONSTRAINT SATISFACTION SEARCH
A constraint satisfaction problem (or CSP) is a special kind of
problem that satisfies some additional structural properties beyond
the basic requirements for problems in general.
In a CSP, the states are defined by the values of a set of
variables and the goal test specifies a set of constraints that the
values must obey.
For example, the 8-queens problem can be viewed as a CSP in which
the variables are the locations of each of the eight queens; the
possible values are squares on the board; and the constraints state
that no two queens can be in the same row, column or diagonal.
A solution to a CSP specifies values for all the variables such that
the constraints are satisfied.
Cryptarithmetic puzzles can also be described as CSPs.
Many kinds of design and scheduling problems can be expressed
as CSPs, so they form a very important subclass.
CSPs can be solved by general-purpose search algorithms, but
because of their special structure, algorithms designed specifically
for CSPs generally perform much better.
CONT’D
Constraints come in several varieties. Unary constraints
concern the value of a single variable.
For example, the variables corresponding to the
leftmost digit on any row of a cryptarithmetic puzzle
are constrained not to have the value 0.
Binary constraints relate pairs of variables. The
constraints in the 8-queens problem are all binary
constraints. Higher-order constraints involve three or
more variables. For example, the columns in the
cryptarithmetic problem must obey an addition constraint
that can involve several variables.
Finally, constraints can be absolute constraints,
violation of which rules out a potential solution, or
preference constraints that say which solutions are
preferred.
CONT’D
Sudoku
• variables are cells
• domain of each variable is {1,2,3,4,5,6,7,8,9}
• constraints: rows, columns, and boxes contain all-different
numbers
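The 8-queens formulation described earlier can be sketched as a CSP solved by simple backtracking. This is a minimal illustration, not an efficient solver: one variable per column, its value is the queen's row, and the binary constraints forbid shared rows and diagonals.

```python
from itertools import combinations

# Goal test for the 8-queens CSP: no two queens share a row,
# and no two queens share a diagonal.
def consistent(assignment):
    return all(assignment[c1] != assignment[c2] and
               abs(assignment[c1] - assignment[c2]) != abs(c1 - c2)
               for c1, c2 in combinations(range(len(assignment)), 2))

# Backtracking: assign columns left to right, and abandon a value
# as soon as it violates a constraint with an earlier queen.
def backtrack(n, assignment=()):
    if len(assignment) == n:
        return assignment                # all variables assigned
    for row in range(n):                 # domain of the next column
        candidate = assignment + (row,)
        if consistent(candidate):        # check constraints early
            result = backtrack(n, candidate)
            if result is not None:
                return result
    return None                          # no value works: back up

solution = backtrack(8)
print(solution)                          # one valid placement per column
print(consistent(solution))             # True
```

Checking constraints on partial assignments, rather than only on complete ones, is what lets the CSP structure prune the search.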
Questions?
Quiz (5 pts)
1. Define a problem. (2 pts)
2. Write two elements of a problem. (1 pt)
3. What is blind search? Give an example. (2 pts)