Local search and optimization
• Local search
– Keep track of single current state
– Move only to neighboring states
– Ignore paths
• Advantages:
– Use very little memory
– Can often find reasonable solutions in large or infinite (continuous) state
spaces.
• “Pure optimization” problems
– All states have an objective function
– Goal is to find state with max (or min) objective value
– Does not quite fit into path-cost/goal-state formulation
– Local search can do quite well on these problems.
Hill-climbing search
• “a loop that continually moves in the direction of increasing value”
– terminates when a peak is reached
– Aka greedy local search
• Value can be either
– Objective function value
– Heuristic function value (minimized)
• Hill climbing does not look ahead beyond the immediate neighbors
• Can randomly choose among the set of best successors
– if multiple have the best value
Hill-climbing (Greedy Local Search)
max version
function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   neighbor, a node
  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
    current ← neighbor
The min version reverses the inequality and selects the lowest-valued
successor.
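As a concrete illustration, here is a minimal Python sketch of the max-version pseudocode above; `neighbors` and `value` are assumed problem-specific callables, not part of any particular library.

```python
def hill_climbing(initial_state, neighbors, value):
    """Greedy local search (max version): repeatedly move to the
    highest-valued successor; stop when no successor improves on the
    current state, i.e., at a local maximum."""
    current = initial_state
    while True:
        succ = neighbors(current)
        if not succ:
            return current  # no successors at all
        best = max(succ, key=value)
        if value(best) <= value(current):
            return current  # local maximum reached
        current = best

# Toy example: maximize f(x) = -(x - 3)**2 over integer states.
f = lambda x: -(x - 3) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(hill_climbing(0, nbrs, f))  # climbs 0 -> 1 -> 2 -> 3, prints 3
```

The min version would use `min(succ, key=value)` and reverse the comparison.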
“Landscape” of search
Hill climbing gets stuck in local optima, depending on where the search
starts.
Hill Climbing Drawbacks
• Local maxima: a local maximum is a peak that is higher than each of its
neighboring states but lower than the global maximum. Hill-climbing
algorithms that reach the vicinity of a local maximum will be drawn upward
toward the peak but will then be stuck with nowhere else to go.
• Plateaux: a plateau is a flat area of the state-space landscape. It can be a flat
local maximum, from which no uphill exit exists. A hill-climbing search
might get lost on the plateau.
• Ridges: ridges result in a sequence of local maxima that is very difficult
for greedy algorithms to navigate.
Hill Climbing Drawbacks
• Local maxima
• Plateaus
• Ridges
Variants of hill climbing
• Stochastic hill climbing chooses at random from among the uphill moves;
the probability of selection can vary with the steepness of the uphill move.
This usually converges more slowly than steepest ascent, but in some state
landscapes, it finds better solutions.
• First-choice hill climbing implements stochastic hill climbing by generating
successors randomly until one is generated that is better than the current
state. This is a good strategy when a state has many successors (e.g.,
thousands).
• Random-restart hill climbing adopts the well-known adage, “If at first you
don’t succeed, try, try again.” It conducts a series of hill-climbing searches
from randomly generated initial states, until a goal is found.
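The first-choice variant can be sketched in Python as follows, assuming a `random_successor(s)` callable that draws one random successor of `s` (all names here are illustrative):

```python
import random

def first_choice_hill_climbing(initial, random_successor, value, max_tries=100):
    """Sample successors at random and move to the FIRST one that improves
    on the current state; if max_tries samples in a row fail to improve,
    treat the current state as a local maximum and return it."""
    current = initial
    while True:
        for _ in range(max_tries):
            candidate = random_successor(current)
            if value(candidate) > value(current):
                current = candidate
                break
        else:
            return current  # no improving successor found in max_tries samples

# Toy example: maximize f(x) = -(x - 3)**2 with random +/-1 moves.
f = lambda x: -(x - 3) ** 2
rs = lambda x: x + random.choice([-1, 1])
random.seed(0)
print(first_choice_hill_climbing(0, rs, f))
```

Random-restart hill climbing would simply call a routine like this in a loop from fresh random initial states, keeping the best result.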
Simulated annealing
• A hill-climbing algorithm that never makes “downhill” moves toward states with
lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck
on a local maximum.
• In contrast, a purely random walk—that is, moving to a successor chosen uniformly
at random from the set of successors—is complete but extremely inefficient.
• Therefore, it seems reasonable to try to combine hill climbing with a random walk
in some way that yields both efficiency and completeness.
• Simulated annealing is such an algorithm.
Simulated Annealing
• A Physical Analogy:
• imagine letting a ball roll downhill on the function surface
– this is like hill-climbing (for minimization)
• now imagine shaking the surface, while the ball rolls,
gradually reducing the amount of shaking
– this is like simulated annealing
• Annealing = physical process of cooling a liquid or metal
until particles achieve a certain frozen crystal state
• simulated annealing:
– free variables are like particles
– seek “low energy” (high quality) configuration
– slowly reducing temp. T with particles moving around randomly
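The analogy can be turned into a short Python sketch (minimization version, with an illustrative geometric cooling schedule; none of these names come from a specific library):

```python
import math
import random

def simulated_annealing(initial, random_successor, value, schedule):
    """Minimize `value`. A move that worsens the value by delta > 0 is
    accepted with probability exp(-delta / T), so uphill (in cost) moves
    are common early and become rare as the temperature T drops to zero."""
    current = initial
    t = 0
    while True:
        T = schedule(t)
        if T < 1e-9:          # "frozen": stop when temperature reaches ~0
            return current
        candidate = random_successor(current)
        delta = value(candidate) - value(current)   # "energy" change
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = candidate
        t += 1

# Toy run: minimize f(x) = (x - 3)**2 with geometric cooling.
f = lambda x: (x - 3) ** 2
rs = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(10, rs, f, schedule=lambda t: 5.0 * 0.95 ** t))
```

The choice of schedule matters: cool too fast and the search freezes in a poor local optimum; cool slowly enough and, in the theoretical limit, it reaches a global optimum with probability approaching 1.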
Local beam search
• Keep track of k states instead of one
– Initially: k randomly selected states
– Next: determine all successors of k states
– If any successor is a goal → finished
– Else select k best from successors and repeat
Local Beam Search (contd)
• Not the same as k random-start searches run in parallel!
• Searches that find good states recruit other searches to join them
• Problem: quite often, all k states end up on same local hill
• Idea: Stochastic beam search
– Choose k successors randomly, biased towards good ones
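A minimal Python sketch of greedy local beam search; it keeps the current k states in the candidate pool so a good state already found is never thrown away (a small practical tweak, and all names are illustrative):

```python
import random

def local_beam_search(random_state, neighbors, value, k=4, steps=50):
    """Start from k random states; each step, pool the current states
    together with ALL their successors and keep the k best. The pooling
    is what lets searches migrate toward the most promising states
    instead of climbing independently."""
    beam = [random_state() for _ in range(k)]
    for _ in range(steps):
        pool = set(beam)
        for state in beam:
            pool.update(neighbors(state))
        beam = sorted(pool, key=value, reverse=True)[:k]
    return max(beam, key=value)

# Toy example: maximize f(x) = -(x - 7)**2 from random integer starts.
f = lambda x: -(x - 7) ** 2
nbrs = lambda x: [x - 1, x + 1]
random.seed(0)
print(local_beam_search(lambda: random.randint(0, 40), nbrs, f))  # -> 7
```

A stochastic variant would replace the `sorted(...)[:k]` selection with a random sample of k pool members weighted by their values, reducing the tendency of all k states to crowd onto the same hill.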