Local Search Algorithms
• They operate on a single current state (rather than multiple paths).
• They generally move only to neighbours of the current state.
• They do not need to maintain paths in memory.
• They are not systematic search procedures.
• The main advantages of local search algorithms are:
• 1) They use very little (and constant) memory.
• 2) They can find reasonable solutions even in large or infinite
state spaces (for which systematic algorithms are unsuitable).
• Local search algorithms are also useful for solving pure
optimization problems, in which the main aim is to find the best
state according to a required objective function.
Hill Climbing Search
• This algorithm repeatedly moves in the direction of
increasing value, that is, uphill. It breaks out of its "moving
up" loop when it reaches a "peak" where no neighbour
has a higher value.
• It does not maintain a search tree; it stores only a
current-node data structure, which records the state and its
objective function value. The algorithm looks only at the
immediate neighbours of the current state.
• It is a form of greedy local search in the sense that it
grabs a good neighbour of the current state without
thinking ahead.
• This greedy strategy often works well, because it is
usually easy to improve a bad state.
Algorithm for Hill Climbing
• The algorithm for hill climbing is as follows:
1) Evaluate the initial state. If it is a goal state, quit;
otherwise make the initial state the current state.
2) Select an operator that can be applied to the current
state and use it to generate a new state.
3) Evaluate the new state. If it is closer to the goal state
than the current state, make it the current state. If it is
not better, discard it and proceed with the current state.
4) If the current state is a goal state, or no new
operators are available, quit. Otherwise repeat from
step 2.
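The steps above can be sketched in Python. This is a minimal illustration, not a definitive implementation: the state representation, the `neighbours` function, and the toy objective are all assumptions made for the example.

```python
def hill_climbing(initial_state, neighbours, value):
    """Basic hill climbing: repeatedly move to the first neighbour that
    improves the objective; stop when no neighbour is better.

    neighbours(state) -> iterable of successor states (the "operators")
    value(state)      -> objective function value (higher is better)
    """
    current = initial_state
    while True:
        improved = False
        for candidate in neighbours(current):      # step 2: generate a new state
            if value(candidate) > value(current):  # step 3: keep it if better
                current = candidate
                improved = True
                break
        if not improved:                           # step 4: no operator helps, quit
            return current

# Toy example: maximise f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
result = hill_climbing(0, lambda x: [x - 1, x + 1], f)  # climbs 0 -> 1 -> 2 -> 3
```

Note that the loop accepts the *first* better neighbour it finds; the steepest-ascent variant described later examines all of them before moving.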
Problems with Hill Climbing
• 1) Local maxima - the algorithm can't see a higher peak elsewhere.
• 2) Plateaux and ridges - the algorithm can't see the way out of a
flat or narrow region.
• Local maximum - a state where we have climbed to the top
of a hill and missed a better solution: a state that is better
than all of its neighbours, but not better than some other
states further away. [Shown in Fig. 4.2.3]
Plateau: a state where everything around is about as good as
where we are currently; in other words, a flat area of the
search space in which all neighbouring states have the same
value.
Ridge: a state on a ridge leading upward from which no single
operator can directly improve the situation, so more than one
operator must be applied to make progress.
Solving Problems Associated with Hill Climbing
All of the problems discussed above can be tackled using
methods such as backtracking, making big jumps (to handle
plateaux or poor local maxima), and applying several rules
before testing (which helps with ridges).
Hill climbing is best suited to problems where the heuristic
improves gradually the closer it gets to the solution; it works
poorly where there are sharp drop-offs. It assumes that local
improvement will lead to global improvement.
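The "big jumps" remedy can be sketched as random-restart hill climbing: run plain hill climbing from several random starting states and keep the best peak found. A minimal Python sketch, assuming integer states and a toy two-peak objective (all names are illustrative):

```python
import random

def hill_climb(state, neighbours, value):
    """Plain hill climbing: keep taking the best neighbour while it improves."""
    while True:
        best = max(neighbours(state), key=value)
        if value(best) <= value(state):
            return state
        state = best

def random_restart(random_state, neighbours, value, restarts=20):
    """Escape poor local maxima by climbing from several random starts
    and keeping the highest peak found."""
    peaks = [hill_climb(random_state(), neighbours, value) for _ in range(restarts)]
    return max(peaks, key=value)

# Toy landscape with a local maximum at x = -4 (value 6) and a
# global maximum at x = 4 (value 10).
f = lambda x: max(10 - (x - 4) ** 2, 6 - (x + 4) ** 2)
random.seed(1)
best = random_restart(lambda: random.randint(-10, 10),
                      lambda x: [x - 1, x + 1], f)
```

With this landscape, a single climb started at a negative x stalls on the local maximum at -4; the restarts make finding the global peak at 4 overwhelmingly likely.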
Steepest Ascent Hill Climbing
This algorithm differs from basic hill climbing by choosing the best
successor rather than the first successor that is better. In this respect
it has elements of breadth-first search.
• Steepest ascent hill climbing algorithm
• 1. Evaluate the initial state. If it is a goal state, quit; otherwise make it
the current state.
• 2. Repeat until a solution is found or the current state does not change:
i) Let TARGET be a state such that any successor of the current state
will be better than TARGET.
ii) For each operator that can be applied to the current state: apply
the operator to create a new state and evaluate it. If it is a goal state,
quit. Otherwise compare it with TARGET; if it is better, set TARGET to
this state.
iii) If TARGET is better than the current state, set the current state
to TARGET.
• Both the basic and the steepest-ascent methods of hill climbing may fail
to find a solution: each may reach a state from which no subsequent
improvement can be made, even though that state is not a solution.
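The contrast with basic hill climbing is easiest to see in code. This is an illustrative sketch under the same assumptions as before (made-up function names, integer states, toy objective): instead of taking the first better successor, each iteration examines them all and takes the best.

```python
def steepest_ascent(initial_state, neighbours, value):
    """Steepest-ascent hill climbing: evaluate ALL successors of the
    current state each iteration and move to the best one (TARGET),
    stopping when no successor beats the current state."""
    current = initial_state
    while True:
        successors = list(neighbours(current))
        if not successors:
            return current
        target = max(successors, key=value)   # best of all successors
        if value(target) <= value(current):   # no successor is better: stop
            return current
        current = target

# Same toy objective as before, but with a wider neighbourhood.
f = lambda x: -(x - 3) ** 2
best = steepest_ascent(0, lambda x: [x - 2, x - 1, x + 1, x + 2], f)
```

Both versions stop at the same kind of peak; steepest ascent simply spends more evaluations per move in exchange for taking the most promising step.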
What is Simulated Annealing?
Simulated Annealing is an optimization algorithm designed to
search for an optimal or near-optimal solution in a large
solution space. The name and concept are derived from the
process of annealing in metallurgy, where a material is
heated and then slowly cooled to remove defects and
achieve a stable crystalline structure. In Simulated Annealing,
the "heat" corresponds to the degree of randomness in the
search process, which decreases over time (cooling schedule)
to refine the solution. The method is widely used in
combinatorial optimization, where problems often have
numerous local optima that standard techniques like gradient
descent might get stuck in. Simulated Annealing excels at
escaping these local optima by introducing controlled
randomness into its search, allowing for a more thorough
exploration of the solution space.
The algorithm
1) Start with evaluating the initial state.
2) Loop, applying one operator per iteration as described
below, until a goal state is found or there are no new
operators left to apply:
i) Set the temperature T according to the annealing schedule.
ii) Select and apply a new operator.
iii) Evaluate the new state. If it is a goal state, quit. Otherwise compute
ΔE = Val(current state) − Val(new state).
If ΔE < 0, make the new state the current state.
Otherwise, make it the current state with probability e^(−ΔE/kT).
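The loop above can be sketched in Python. This is an illustrative sketch, assuming a maximisation problem, a user-supplied cooling schedule, and a toy objective; every name here is an assumption of the example, not a fixed API.

```python
import math
import random

def simulated_annealing(initial, neighbour, value, schedule, k=1.0):
    """Simulated annealing following the loop above: always accept better
    states; accept worse ones with probability e^(-ΔE / kT)."""
    current = initial
    t = 0
    while True:
        T = schedule(t)              # i) temperature from the annealing schedule
        if T <= 0:
            return current           # schedule exhausted: stop
        new = neighbour(current)     # ii) select and apply an operator
        delta_e = value(current) - value(new)
        if delta_e < 0:              # iii) new state is better: accept it
            current = new
        elif random.random() < math.exp(-delta_e / (k * T)):
            current = new            # worse state accepted with prob e^(-ΔE/kT)
        t += 1

# Toy run: maximise f(x) = -(x - 3)^2 with a geometric cooling schedule.
random.seed(0)
f = lambda x: -(x - 3) ** 2
schedule = lambda t: 10 * 0.9 ** t if t < 200 else 0
best = simulated_annealing(0, lambda x: x + random.choice([-1, 1]), f, schedule)
```

Early on, the high temperature lets the search accept downhill moves and wander; as T shrinks, the acceptance probability for worse states collapses toward zero and the search settles into greedy hill climbing around a peak.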
Constraint Satisfaction
• Constraint satisfaction is a search procedure that operates in a space of
constraint sets. The initial state contains the constraints that are
originally given in the problem description.
– A goal state is any state that has been constrained “enough” where
“enough” must be defined for each problem.
– For example, in cryptarithmetic problems, enough means that each
letter has been assigned a unique numeric value.
– Constraint Satisfaction problems in AI have the goal of discovering
some problem state that satisfies a given set of constraints.
– Design tasks can be viewed as constraint satisfaction problems in
which a design must be created within fixed limits on time, cost, and
materials.
– Constraint Satisfaction is a two-step process:
• First, constraints are discovered and propagated as far as possible
throughout the system.
• Then, if there is still no solution, search begins: a guess about
something is made and added as a new constraint, and propagation
continues from there.
• Example: Cryptarithmetic Problem
• Constraints:
– No two letters have the same value.
– The sums of the digits must be as shown in the problem.
• Goal State:
– All letters have been assigned a digit in such a way that all the
initial constraints are satisfied.
• Input state and solution: [shown in figure]
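As an illustration, a brute-force solver for such puzzles fits in a few lines. The SEND + MORE = MONEY instance used below is the classic textbook example, assumed here since the original figure is not reproduced; a real constraint-satisfaction solver would interleave propagation with search rather than enumerate assignments blindly.

```python
from itertools import permutations

def solve_cryptarithm(words, result):
    """Brute-force cryptarithmetic: try digit assignments until the
    constraints (distinct digits, no leading zeros, correct sum) hold."""
    letters = sorted(set("".join(words) + result))
    if len(letters) > 10:
        return None                   # cannot give each letter a distinct digit
    # Net positional weight of each letter: sum(words) - result must be 0.
    weight = dict.fromkeys(letters, 0)
    for word in words:
        for pos, c in enumerate(reversed(word)):
            weight[c] += 10 ** pos
    for pos, c in enumerate(reversed(result)):
        weight[c] -= 10 ** pos
    weights = [weight[c] for c in letters]
    leading = {w[0] for w in words} | {result[0]}
    lead_idx = [i for i, c in enumerate(letters) if c in leading]
    for digits in permutations(range(10), len(letters)):
        if any(digits[i] == 0 for i in lead_idx):
            continue                  # constraint: no leading zeros
        if sum(w * d for w, d in zip(weights, digits)) == 0:
            return dict(zip(letters, digits))
    return None

# Classic instance: SEND + MORE = MONEY.
solution = solve_cryptarithm(["SEND", "MORE"], "MONEY")
```

The two constraints from the slide appear directly: the `permutations` iterator guarantees no two letters share a value, and the weighted-sum test enforces that the column sums work out.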