Module 2: Overview of Problem Solving
• 2.1 Solving Problems by Searching - Problem-Solving Agent, Formulating
Problems, and Example Problems.
• 2.2 Search Methods - Uninformed search, Breadth First Search (BFS), Depth
First Search (DFS), Depth Limited Search, Depth First Iterative Deepening (DFID).
• 2.3 Informed Search Methods - Greedy best first Search, A* Search, Memory
bounded heuristic Search.
• 2.4 Local Search Algorithms and Optimization Problems - Hill climbing
search, Simulated annealing, Local beam search, Genetic algorithms, Ant Colony
Optimization.
Problem-solving Agents :
• Intelligent agents are supposed to maximize their performance measure.
Achieving this is sometimes simplified if the agent can adopt a goal and aim
at satisfying it.
• Goals help organize behavior by limiting the objectives that the agent is trying
to achieve and hence the actions it needs to consider.
• Goal formulation, based on the current situation and the agent’s
performance measure, is the first step in problem solving.
• Problem formulation is the process of deciding what actions and states to
consider, given a goal.
Search algorithms :
• Uninformed search algorithms are algorithms that are given no information
about the problem other than its definition. Although some of these
algorithms can solve any solvable problem, none of them can do so efficiently.
• BFS, DFS
• Informed search algorithms, on the other hand, can do quite well given
some guidance on where to look for solutions.
SEARCH – SOLUTION – EXECUTION :
• The process of looking for a sequence of actions that reaches the goal is called
search.
• A search algorithm takes a problem as input and returns a solution in the form
of an action sequence. Once a solution is found, the actions it recommends can
be carried out. This is called the execution phase. Thus, we have a simple
“formulate, search, execute” design for the agent.
• After formulating a goal and a problem to solve, the agent calls a search
procedure to solve it. It then uses the solution to guide its actions, doing
whatever the solution recommends as the next thing to do—typically, the first
action of the sequence—and then removing that step from the sequence. Once the
solution has been executed, the agent will formulate a new goal.
Well-defined problems and solutions :
• A problem can be defined formally by five components:
1. The initial state that the agent starts in.
2. A description of the possible actions available to the agent.
3. A description of what each action does; the formal name for this is the
transition model.
4. The goal test, which determines whether a given state is a goal state.
Sometimes there is an explicit set of possible goal states, and the test simply checks
whether the given state is one of them.
5. A path cost function that assigns a numeric cost to each path. The problem-
solving agent chooses a cost function that reflects its own performance measure.
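These five components map naturally onto a small data type. Below is a minimal Python sketch (illustrative only; the Problem class and its method names are assumptions, not a standard API) that the later search sketches in this module reuse:

    # Minimal sketch of the five-component problem definition.
    # Class and method names are illustrative, not a standard API.
    class Problem:
        def __init__(self, initial, goal=None):
            self.initial = initial                 # 1. initial state
            self.goal = goal

        def actions(self, state):                  # 2. actions available in `state`
            raise NotImplementedError

        def result(self, state, action):           # 3. transition model
            raise NotImplementedError

        def goal_test(self, state):                # 4. goal test
            return state == self.goal

        def step_cost(self, state, action, next_state):  # 5. per-step path cost
            return 1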
Examples : vacuum cleaner
• States: The state is determined by both the agent location and the dirt locations.
The agent is in one of two locations, each of which might or might not contain
dirt. Thus, there are 2 × 2² = 8 possible world states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left,
Right, and Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving
Left in the leftmost square, moving Right in the rightmost square, and Sucking in
a clean square have no effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
The state space for the vacuum world. Links denote actions: L = Left, R = Right, S = Suck.
Examples : 8-puzzle
• States: A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.
• Initial state: Any state can be designated as the initial state.
Note that any given goal can be reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space
Left, Right, Up, or Down. Different subsets of these are possible depending on where the
blank is.
• Transition model: Given a state and action, this returns the
resulting state.
• Goal test: This checks whether the state matches the goal
configuration.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
Examples : 8-queens problem
• An incremental formulation involves operators that augment the
state description, starting with an empty state; for the 8-queens
problem, this means that each action adds a queen to the state.
Real-world problems :
• The route-finding problem is defined in terms of specified locations and transitions along links
between them.
• States: Each state obviously includes a location (e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight segment) may depend on previous segments,
their fare bases, and their status as domestic or international, the state must record extra
information about these “historical” aspects.
• Initial state: This is specified by the user’s query.
• Actions: Take any flight from the current location, in any seat class, leaving after the current
time, leaving enough time for within-airport transfer if needed.
• Transition model: The state resulting from taking a flight will have the flight’s destination as the
current location and the flight’s arrival time as the current time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration
procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.
• A really good system should include contingency plans such as backup reservations on
alternate flights to the extent that these are justified by the cost and likelihood of failure of
the original plan.
• Touring problems are closely related to route-finding problems, but with an important
difference. The goal test would check whether the agent is in the final state and has visited
all the states.
• The traveling salesperson problem (TSP) is a touring problem in which each city must be
visited exactly once. The aim is to find the shortest tour.
• A VLSI layout problem requires positioning millions of components and connections on a
chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize
manufacturing yield.
• Robot navigation is a generalization of the route-finding problem.
• Automatic assembly sequencing of complex objects by a robot.
Searching For Solutions :
• A solution is an action sequence, so search algorithms work by considering various
possible action sequences. The possible action sequences starting at the initial state
form a search tree with the initial state at the root; the branches are actions and
the nodes correspond to states in the state space of the problem.
1. The root node of the tree corresponds to the initial state.
2. Then expand the current state; that is, apply each legal action to the current
state, thereby generating a new set of states.
3. The set of all leaf nodes available for expansion at any given point is called the
frontier.
4. Search algorithms all share this basic structure; they vary primarily according to
how they choose which state to expand next—the so-called search strategy.
• Loopy paths are a special case of the more general concept of redundant paths,
which exist whenever there is more than one way to get from one state to another.
• A redundant path is a worse way to get to the same state than the best path found
so far. In some cases, it is possible to define the problem itself so as to eliminate
redundant paths.
In other cases, redundant paths are unavoidable. This includes all problems where
the actions are reversible, such as route-finding problems and sliding block
puzzles.
• The way to avoid exploring redundant paths is to remember where one has been. To
do this, augment the TREE-SEARCH algorithm with a data structure called the
explored set (also known as the closed list), which remembers every expanded node.
• The search tree constructed by the GRAPH-SEARCH algorithm contains at most one
copy of each state, so it can be thought of as growing a tree directly on the state-space
graph.
• The algorithm has another nice property: the frontier separates the state-space graph
into the explored region and the unexplored region, so that every path from the initial
state to an unexplored state has to pass through a state in the frontier.
Measuring problem-solving performance :
• Completeness: Is the algorithm guaranteed to find a solution when there is
one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
Uninformed search strategies (blind search) :
• The term means that the strategies have no additional information about
states beyond that provided in the problem definition. All they can do is
generate successors and distinguish a goal state from a non-goal state.
• All search strategies are distinguished by the order in which nodes are
expanded.
• Strategies that know whether one non-goal state is “more promising” than
another are called informed search or heuristic search strategies.
Breadth-first search :
• Breadth-first search is a simple strategy in which the root node is expanded first,
then all the successors of the root node are expanded next, then their successors, and
so on.
• In general, all the nodes are expanded at a given depth in the search tree before any
nodes at the next level are expanded.
• In breadth-first search, the shallowest unexpanded node is chosen for expansion.
This is achieved very simply by using a FIFO queue for the frontier.
• Thus, new nodes (which are always deeper than their parents) go to the back of
the queue, and old nodes, which are shallower than the new nodes, get expanded
first.
• There is one slight tweak on the general graph-search algorithm, which is that
the goal test is applied to each node when it is generated rather than when it is
selected for expansion.
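As a concrete illustration of the FIFO frontier and the generation-time goal test, here is a minimal Python sketch (not from the slides; it assumes the hypothetical Problem interface sketched earlier):

    from collections import deque

    # Minimal breadth-first graph search sketch.
    def breadth_first_search(problem):
        start = problem.initial
        if problem.goal_test(start):
            return [start]
        frontier = deque([[start]])              # FIFO queue of paths
        explored = {start}
        while frontier:
            path = frontier.popleft()            # shallowest unexpanded node first
            for action in problem.actions(path[-1]):
                child = problem.result(path[-1], action)
                if child not in explored:
                    if problem.goal_test(child): # goal test at generation time
                        return path + [child]
                    explored.add(child)
                    frontier.append(path + [child])
        return None                              # failure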
Breadth-first search – performance :
• Breadth-first search is complete - if the shallowest goal node is at some finite depth d,
breadth-first search will eventually find it after generating all shallower nodes (provided the
branching factor b is finite).
• The shallowest goal node is not necessarily the optimal one. Technically, it is optimal if
the path cost is a non-decreasing function of the depth of the node. The most common
such scenario is that all actions have the same cost.
• Assume a uniform tree where every state has b successors. Suppose that the solution is
at depth d. Then the total number of nodes generated is
b + b^2 + b^3 + · · · + b^d = O(b^d).
• Space complexity: for any kind of graph search, which stores every expanded node in
the explored set, the space complexity is always within a factor of b of the time complexity.
For breadth-first search the space complexity is O(b^d), i.e., it is dominated by the size of
the frontier.
Uniform-cost search :
• When all step costs are equal, breadth-first search is optimal because it
always expands the shallowest unexpanded node.
• Instead of expanding the shallowest node, uniform-cost search expands the
node n with the lowest path cost g(n). This is done by storing the frontier as a
priority queue ordered by ‘g’.
• In addition to the ordering of the queue by path cost, there are two other
significant differences from breadth-first search.
• The first is that the goal test is applied to a node when it is selected for expansion.
• The second difference is that a test is added in case a better path is found to a
node currently on the frontier.
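The two differences just listed show up directly in code. A minimal Python sketch (an illustration, reusing the hypothetical Problem interface from earlier; the tie-breaking counter is an implementation detail, not part of the algorithm):

    import heapq, itertools

    # Uniform-cost search sketch: frontier is a priority queue ordered by g.
    def uniform_cost_search(problem):
        counter = itertools.count()        # tie-breaker so states are never compared
        frontier = [(0, next(counter), problem.initial, [problem.initial])]
        best_g = {problem.initial: 0}      # cheapest known cost to each state
        while frontier:
            g, _, state, path = heapq.heappop(frontier)
            if problem.goal_test(state):   # goal test at expansion, not generation
                return path, g
            if g > best_g.get(state, float("inf")):
                continue                   # stale entry; a cheaper path was found
            for action in problem.actions(state):
                child = problem.result(state, action)
                g2 = g + problem.step_cost(state, action, child)
                if g2 < best_g.get(child, float("inf")):  # better path to child
                    best_g[child] = g2
                    heapq.heappush(frontier, (g2, next(counter), child, path + [child]))
        return None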
• It is easy to see that uniform-cost search is optimal in general.
• First, it is observed that whenever uniform-cost search selects a node n for expansion,
the optimal path to that node has been found.
• Then, because step costs are nonnegative, paths never get shorter as nodes are added.
• These two facts together imply that uniform-cost search expands nodes in order
of their optimal path cost. Hence, the first goal node selected for expansion must
be the optimal solution.
• Completeness is guaranteed provided the cost of every step exceeds some small
positive constant ‘ε’.
• Uniform-cost search is guided by path costs rather than depths, so its complexity is
not easily characterized in terms of b and d.
• Let C∗ be the cost of the optimal solution, and assume that every action costs at least
ε. Then the worst-case time and space complexity is O(b^(1+⌊C∗/ε⌋)), which can be much
greater than b^d.
• This is because uniform-cost search can explore large trees of small steps before
exploring paths involving large and perhaps useful steps.
IMP :
• When all step costs are the same, uniform-cost search is similar to breadth-first
search, except that the latter stops as soon as it generates a goal, whereas uniform-cost
search examines all the nodes at the goal’s depth to see if one has a lower cost; thus
uniform-cost search does strictly more work by expanding nodes at depth d unnecessarily.
Depth-first search :
• Depth-first search always expands the deepest node in the current frontier of
the search tree.
• The search proceeds immediately to the deepest level of the search tree,
where the nodes have no successors. As those nodes are expanded, they are
dropped from the frontier, so then the search “backs up” to the next deepest
node that still has unexplored successors.
• Depth-first search uses a LIFO queue.
• A LIFO queue means that the most recently generated node is chosen for expansion.
This must be the deepest unexpanded node because it is one deeper than its
parent—which, in turn, was the deepest unexpanded node when it was selected.
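A minimal Python sketch of the tree-search version (not from the slides; it reuses the hypothetical Problem interface and avoids only loops along the current path):

    # Depth-first tree search sketch using an explicit LIFO stack.
    def depth_first_search(problem):
        frontier = [[problem.initial]]        # stack of paths
        while frontier:
            path = frontier.pop()             # most recently generated node first
            state = path[-1]
            if problem.goal_test(state):
                return path
            for action in problem.actions(state):
                child = problem.result(state, action)
                if child not in path:         # avoid loops on the current path only
                    frontier.append(path + [child])
        return None                           # failure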
Depth-first search – performance :
• The graph-search version, which avoids repeated states and redundant paths, is
complete in finite state spaces because it will eventually expand every node.
• The tree-search version, on the other hand, is not complete.
• In infinite state spaces, both versions fail if an infinite non-goal path is encountered.
• Both versions are non-optimal.
• The time complexity of depth-first graph search is bounded by the size of the
state space.
• A depth-first tree search, on the other hand, may generate all of the O(b^m) nodes
in the search tree, where m is the maximum depth of any node; this can be much
greater than the size of the state space.
Depth-limited search :
• The embarrassing failure of depth-first search in infinite state spaces can be
alleviated by supplying depth-first search with a predetermined depth limit l.
That is, nodes at depth l are treated as if they have no successors. This
approach is called depth-limited search.
• Unfortunately, it also introduces an additional source of incompleteness if we
choose l < d.
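A minimal Python sketch (an illustration on the hypothetical Problem interface; the 'cutoff' sentinel distinguishes hitting the limit from genuine failure):

    # Recursive depth-limited search sketch.
    # Returns a solution path, None (failure), or the string "cutoff".
    def depth_limited_search(problem, limit):
        def recurse(path, depth):
            state = path[-1]
            if problem.goal_test(state):
                return path
            if depth == limit:
                return "cutoff"               # nodes at the limit get no successors
            cutoff_occurred = False
            for action in problem.actions(state):
                child = problem.result(state, action)
                if child in path:             # skip loops on the current path
                    continue
                result = recurse(path + [child], depth + 1)
                if result == "cutoff":
                    cutoff_occurred = True
                elif result is not None:
                    return result
            return "cutoff" if cutoff_occurred else None
        return recurse([problem.initial], 0)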
Iterative deepening depth-first search :
• Iterative deepening search (or iterative deepening depth-first search) is a
general strategy, often used in combination with depth-first tree search, that
finds the best depth limit.
• It does this by gradually increasing the limit—first 0, then 1, then 2, and so
on—until a goal is found.
• This will occur when the depth limit reaches d, the depth of the shallowest
goal node.
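Given the depth_limited_search sketch above, iterative deepening is a short wrapper (again a sketch, not the slides' own code):

    import itertools

    # Iterative deepening: run DLS with limits 0, 1, 2, ... until success.
    def iterative_deepening_search(problem):
        for limit in itertools.count():       # 0, 1, 2, ...
            result = depth_limited_search(problem, limit)
            if result != "cutoff":
                return result                 # a solution path, or None (failure)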
• Iterative deepening search may seem wasteful because states are generated
multiple times.
• In an iterative deepening search, the nodes on the bottom level (depth d) are
generated once, those on the next-to-bottom level are generated twice, and so
on, up to the children of the root, which are generated d times. So the total
number of nodes generated in the worst case is
N(IDS) = (d)b + (d − 1)b^2 + · · · + (1)b^d, which is O(b^d).
• In general, iterative deepening is the preferred uninformed search method
when the search space is large and the depth of the solution is not known.
Comparing uninformed search strategies :

Criterion    Breadth-First   Uniform-Cost        Depth-First   Depth-Limited   Iterative Deepening
Complete?    Yes (a)         Yes (a, b)          No            No              Yes (a)
Time         O(b^d)          O(b^(1+⌊C∗/ε⌋))     O(b^m)        O(b^l)          O(b^d)
Space        O(b^d)          O(b^(1+⌊C∗/ε⌋))     O(bm)         O(bl)           O(bd)
Optimal?     Yes (c)         Yes                 No            No              Yes (c)

Here b is the branching factor, d the depth of the shallowest solution, m the maximum
depth of the tree, and l the depth limit. (a) complete if b is finite; (b) complete if step
costs are at least ε for positive ε; (c) optimal if step costs are all identical.
Informed (Heuristic) Search Strategies :
• In an informed search strategy, one that uses problem-specific knowledge
beyond the definition of the problem itself, solutions can be found more efficiently
than with an uninformed strategy.
• Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH
algorithm in which a node is selected for expansion based on an evaluation
function, f(n).
• The evaluation function is construed as a cost estimate, so the node with the
lowest evaluation is expanded first.
• Most best-first algorithms include as a component of f(n) a heuristic function,
denoted h(n):
h(n) = estimated cost of the cheapest path from the state at node n to a goal
state.
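This whole family can be captured in one sketch parameterized by the evaluation function f. The Python below is an illustration (not the slides' code) reusing the hypothetical Problem interface; greedy search and A* below differ only in the f they pass in:

    import heapq, itertools

    # Generic best-first graph search: expand the frontier node with lowest f.
    # `f` is a function of (g, state); `counter` only breaks priority ties.
    def best_first_search(problem, f):
        counter = itertools.count()
        frontier = [(f(0, problem.initial), next(counter), 0,
                     problem.initial, [problem.initial])]
        best_g = {problem.initial: 0}
        while frontier:
            _, _, g, state, path = heapq.heappop(frontier)
            if problem.goal_test(state):
                return path, g
            for action in problem.actions(state):
                child = problem.result(state, action)
                g2 = g + problem.step_cost(state, action, child)
                if g2 < best_g.get(child, float("inf")):
                    best_g[child] = g2
                    heapq.heappush(frontier, (f(g2, child), next(counter), g2,
                                              child, path + [child]))
        return None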
Greedy best-first search :
• Greedy best-first search tries to expand the node that is closest to the goal,
on the grounds that this is likely to lead to a solution quickly. Thus, it
evaluates nodes by using just the heuristic function; that is, f(n) = h(n).
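Using the generic best_first_search sketch above, greedy search is a one-liner; h is a caller-supplied heuristic function over states:

    # Greedy best-first search: f(n) = h(n), ignoring the path cost g.
    def greedy_best_first_search(problem, h):
        return best_first_search(problem, lambda g, state: h(state))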
• Greedy best-first tree search is also incomplete even in a finite state space,
much like depth-first search.
• The graph search version is complete in finite spaces, but not in infinite ones.
• The worst-case time and space complexity for the tree version is O(b^m), where
m is the maximum depth of the search space.
A* search:
Minimizing the total estimated solution cost
• The most widely known form of best-first search is called A∗ search. It
evaluates nodes by combining g(n), the cost to reach the node, and h(n), the
cost to get from the node to the goal:
f(n) = g(n) + h(n)
• Since g(n) gives the path cost from the start node to node n, and h(n) is the
estimated cost of the cheapest path from n to the goal, we have,
f(n) = estimated cost of the cheapest solution through n.
• It turns out that this strategy is more than just reasonable: provided that the
heuristic function h(n) satisfies certain conditions, A∗ search is both complete
and optimal.
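With the generic best_first_search sketch given earlier, A* is again a one-liner (a sketch; h is the caller-supplied heuristic):

    # A* search: f(n) = g(n) + h(n).
    def a_star_search(problem, h):
        return best_first_search(problem, lambda g, state: g + h(state))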
• Find the most cost-effective path to reach the final state from the initial state
using the A* algorithm, for the 8-puzzle instance shown in the figure.
• Consider g(n) = depth of the node and h(n) = number of misplaced tiles.
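The misplaced-tiles heuristic is easy to compute; a short Python sketch (the start/goal layouts below are illustrative, since the figure's actual instance is not reproduced here; the result can be plugged into a_star_search above):

    # h(n) for the 8-puzzle: tiles (excluding the blank, 0) out of place.
    # States are 9-tuples read row by row.
    def misplaced_tiles(state, goal):
        return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

    goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # illustrative goal layout
    start = (2, 8, 3, 1, 6, 4, 7, 0, 5)   # illustrative start layout
    print(misplaced_tiles(start, goal))   # -> 4, so h(start) = 4 here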
Conditions for optimality:
Admissibility and consistency
• The first condition we require for optimality is that h(n) be an admissible
heuristic.
• An admissible heuristic is one that never overestimates the cost to
reach the goal.
• Because g(n) is the actual cost to reach n along the current path, and
f(n) = g(n) + h(n), we have as an immediate consequence that f(n) never
overestimates the true cost of a solution along the current path through n.
• Admissible heuristics are by nature optimistic because they think the cost of
solving the problem is less than it actually is.
• A second, slightly stronger condition called consistency (or sometimes
monotonicity) is required only for applications of A∗ to graph search.
• A heuristic h(n) is consistent if, for every node n and every successor n′ of n
generated by any action a, the estimated cost of reaching the goal from n is
no greater than the step cost of getting to n′ plus the estimated cost of
reaching the goal from n′:
h(n) ≤ c(n, a, n′) + h(n′)
Memory-bounded heuristic search :
• The simplest way to reduce memory requirements for A∗ is to adapt the idea of
iterative deepening to the heuristic search context, resulting in the iterative-
deepening A∗ (IDA∗) algorithm.
• The main difference between IDA∗ and standard iterative deepening is that the
cutoff used is the f-cost (g+h) rather than the depth; at each iteration, the cutoff
value is the smallest f-cost of any node that exceeded the cutoff on the previous
iteration.
• IDA∗ is practical for many problems with unit step costs and avoids the
substantial overhead associated with keeping a sorted queue of nodes.
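A minimal Python sketch of the f-cost contour idea (an illustration on the hypothetical Problem interface; inner calls report the smallest f-cost that exceeded the cutoff, which becomes the next bound):

    # IDA* sketch: depth-first search bounded by f = g + h.
    def ida_star(problem, h):
        def search(path, g, bound):
            state = path[-1]
            f = g + h(state)
            if f > bound:
                return f                      # report the exceeding f-cost
            if problem.goal_test(state):
                return path
            minimum = float("inf")
            for action in problem.actions(state):
                child = problem.result(state, action)
                if child in path:             # skip loops on the current path
                    continue
                result = search(path + [child],
                                g + problem.step_cost(state, action, child), bound)
                if isinstance(result, list):
                    return result             # a solution path
                minimum = min(minimum, result)
            return minimum
        bound = h(problem.initial)
        while True:
            result = search([problem.initial], 0, bound)
            if isinstance(result, list):
                return result
            if result == float("inf"):
                return None                   # no solution
            bound = result                    # next cutoff: smallest exceeding f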
Recursive best-first search (RBFS) :
• Recursive best-first search (RBFS) is a simple recursive algorithm that attempts
to mimic the operation of standard best-first search, but using only linear space.
• Its structure is similar to that of a recursive depth-first search, but rather than
continuing indefinitely down the current path, it uses the f_limit variable to keep
track of the f-value of the best alternative path available from any ancestor of the
current node.
• If the current node exceeds this limit, the recursion unwinds back to the
alternative path. As the recursion unwinds, RBFS replaces the f-value of each
node along the path with a backed-up value - the best f-value of its children.
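A compact Python sketch of this behavior (an approximation of the textbook pseudocode, on the hypothetical Problem interface; each child inherits the parent's backed-up f-value if it is larger):

    import math

    # Recursive best-first search sketch (linear space).
    def recursive_best_first_search(problem, h):
        def recurse(state, path, g, f_node, f_limit):
            if problem.goal_test(state):
                return path, f_node
            successors = []
            for action in problem.actions(state):
                child = problem.result(state, action)
                if child in path:                      # skip loops on current path
                    continue
                g2 = g + problem.step_cost(state, action, child)
                successors.append([max(g2 + h(child), f_node), child, g2])
            if not successors:
                return None, math.inf
            while True:
                successors.sort(key=lambda s: s[0])
                best = successors[0]
                if best[0] > f_limit:
                    return None, best[0]               # unwind; report backed-up f
                alternative = successors[1][0] if len(successors) > 1 else math.inf
                result, best[0] = recurse(best[1], path + [best[1]], best[2],
                                          best[0], min(f_limit, alternative))
                if result is not None:
                    return result, best[0]
        solution, _ = recurse(problem.initial, [problem.initial], 0,
                              h(problem.initial), math.inf)
        return solution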
• RBFS is somewhat more efficient than IDA∗, but still suffers from excessive node
regeneration.
• RBFS often changes its mind about which path is best: every time the current best
path is extended, its f-value is likely to increase - h is usually less optimistic for
nodes closer to the goal. When this happens, the second-best path might become
the best path, so the search has to backtrack to follow it.
• Each mind change corresponds to an iteration of IDA∗ and could require many re-
expansions of forgotten nodes to recreate the best path and extend it one more
node.
• Like A∗ tree search, RBFS is an optimal algorithm if the heuristic function h(n)
is admissible.
• Its space complexity is linear in the depth of the deepest optimal solution, but
its time complexity is rather difficult to characterize: it depends both on the
accuracy of the heuristic function and on how often the best path changes as
nodes are expanded.
• IDA∗ and RBFS suffer from using too little memory. Between iterations, IDA∗
retains only a single number: the current f-cost limit. RBFS retains more
information in memory, but it uses only linear space: even if more memory were
available, RBFS has no way to make use of it.
Simplified Memory-bounded A∗ : SMA*
• It seems sensible, therefore, to use all available memory. The algorithm that can
do this is SMA∗ (simplified memory-bounded A∗). It is simple and proceeds just
like A∗, expanding the best leaf until memory is full.
• At this point, it cannot add a new node to the search tree without dropping an old
one. SMA∗ always drops the worst leaf node—the one with the highest f-value.
• Like RBFS, SMA∗ then backs up the value of the forgotten node to its parent. In
this way, the ancestor of a forgotten subtree knows the quality of the best path in
that subtree.
• What if all the leaf nodes have the same f-value?
• To avoid selecting the same node for deletion and expansion, SMA∗ expands the newest
best leaf and deletes the oldest worst leaf.
• If the leaf is not a goal node, then even if it is on an optimal solution path, that
solution is not reachable with the available memory. Therefore, the node can be
discarded exactly as if it had no successors.
• SMA∗ is complete if there is any reachable solution.
• It is optimal if any optimal solution is reachable; otherwise, it returns the best
reachable solution.
• Memory limitations can make a problem intractable from the point of view of
computation time.
Local Search Algorithms And Optimization Problems :
• The search algorithms seen so far are designed to explore search spaces
systematically. When a goal is found, the path to that goal also constitutes a
solution to the problem.
• When the path to the goal does not matter, a different class of algorithms,
called local search algorithms, that do not worry about paths at all can be
considered.
• Local search algorithms operate using a single current node (rather than
multiple paths) and generally move only to neighbors of that node. Typically,
the paths followed by the search are not retained.
• Although local search algorithms are not systematic, they have two key
advantages:
1. They use very little memory—usually a constant amount; and
2. They can often find reasonable solutions in large or infinite (continuous)
state spaces for which systematic algorithms are unsuitable.
• In addition to finding goals, local search algorithms are useful for solving
pure optimization problems, in which the aim is to find the best state
according to an objective function. Many optimization problems do not fit
the “standard” search model.
One-dimensional state-space landscape :
• A landscape has both “location” (defined by the state) and “elevation” (defined by the
value of the heuristic cost function or objective function).
• If elevation corresponds to cost, then the aim is to find the lowest valley - a global
minimum;
• If elevation corresponds to an objective function, then the aim is to find the highest
peak - a global maximum.
• A complete local search algorithm always finds a goal if one exists;
• An optimal algorithm always finds a global minimum/maximum.
Hill-climbing search :
• The hill-climbing search algorithm is simply a loop that continually moves in the
direction of increasing value, that is, uphill. It terminates when it reaches a
“peak” where no neighbor has a higher value.
• The algorithm does not maintain a search tree, so the data structure for the
current node need only record the state and the value of the objective function.
Hill climbing does not look ahead beyond the immediate neighbors of the current
state.
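A minimal Python sketch of steepest-ascent hill climbing (an illustration; it assumes the hypothetical Problem interface from earlier plus a caller-supplied objective function `value` to maximize):

    # Steepest-ascent hill climbing: move to the best neighbor until stuck.
    def hill_climbing(problem, value):
        current = problem.initial
        while True:
            neighbors = [problem.result(current, a)
                         for a in problem.actions(current)]
            if not neighbors:
                return current
            best = max(neighbors, key=value)
            if value(best) <= value(current):   # peak (or plateau edge) reached
                return current
            current = best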
• Hill climbing is sometimes called greedy local search because it grabs a good
neighbor state without thinking ahead about where to go next.
• Hill climbing often makes rapid progress toward a solution because it is
usually quite easy to improve a bad state.
• Unfortunately, hill climbing often gets stuck for the following reasons:
• Local maxima: a local maximum is a peak that is higher than each of its
neighboring states but lower than the global maximum.
• Ridges: Ridges result in a sequence of local maxima that is very difficult for
greedy algorithms to navigate.
• Plateaux: a plateau is a flat area of the state-space landscape. A hill-climbing
search might get lost on the plateau.
• Many variants of hill climbing have been invented.
• Stochastic hill climbing chooses at random from among the uphill moves; the
probability of selection can vary with the steepness of the uphill move. This
usually converges more slowly than steepest ascent, but in some state
landscapes, it finds better solutions.
• First-choice hill climbing implements stochastic hill climbing by generating
successors randomly until one is generated that is better than the current state.
This is a good strategy when a state has many (e.g., thousands) of successors.
• The above methods often fail to find a goal when one exists because they can
get stuck on local maxima.
• Random-restart hill climbing adopts the well-known adage, “If at first you don’t
succeed, try, try again.”
• The success of hill climbing depends very much on the shape of the state-space
landscape: if there are few local maxima and plateaux, random-restart hill
climbing will find a good solution very quickly.
Simulated annealing :
• A hill-climbing algorithm that never makes “downhill” moves toward states with
lower value (or higher cost) is guaranteed to be incomplete, because it can get
stuck on a local maximum.
• Therefore, it seems reasonable to try to combine hill climbing with a random walk
in some way that yields both efficiency and completeness. Simulated annealing
is such an algorithm.
• In metallurgy, annealing is the process used to temper or harden metals and
glass by heating them to a high temperature and then gradually cooling them,
thus allowing the material to reach a low energy crystalline state.
• Consider a ping-pong ball: roll it and let it come to rest at a local minimum. If the
surface is shaken, the ball can bounce out of the local minimum. The trick is to
shake just hard enough to bounce the ball out of local minima but not hard enough
to dislodge it from the global minimum.
• The innermost loop of the simulated-annealing algorithm is quite similar to hill climbing.
Instead of picking the best move, however, it picks a random move. If the move improves
the situation, it is always accepted. Otherwise, the algorithm accepts the move with some
probability less than 1.
• The probability decreases exponentially with the “badness” of the move—the amount ΔE by
which the evaluation is worsened: a bad move (ΔE < 0) is accepted with probability e^(ΔE/T).
The probability also decreases as the “temperature” T goes down:
• “bad” moves are more likely to be allowed at the start when T is high, and they become more
unlikely as T decreases.
• If the schedule lowers T slowly enough, the algorithm will find a global optimum with
probability approaching 1.
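A minimal Python sketch of this loop (an illustration on the hypothetical Problem interface; `value` is the objective to maximize and the cooling schedule shown is an assumption, not from the slides):

    import math, random

    # Simulated annealing sketch: random move; always accept improvements;
    # accept a worsening move with probability exp(delta_e / T).
    def simulated_annealing(problem, value, schedule, max_steps=10**6):
        current = problem.initial
        for t in range(1, max_steps):
            T = schedule(t)
            if T <= 0:
                break
            actions = problem.actions(current)
            if not actions:
                break
            nxt = problem.result(current, random.choice(actions))
            delta_e = value(nxt) - value(current)
            if delta_e > 0 or random.random() < math.exp(delta_e / T):
                current = nxt
        return current

    # One possible exponential cooling schedule (an assumption):
    exponential_schedule = lambda t: 1.0 * (0.99 ** t)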
Local beam search :
• Keeping just one node in memory might seem to be an extreme reaction to the
problem of memory limitations. The local beam search algorithm keeps track of
k states rather than just one.
• It begins with k randomly generated states. At each step, all the successors of all
k states are generated. If any one is a goal, the algorithm halts. Otherwise, it
selects the k best successors from the complete list and repeats.
• In a local beam search, useful information is passed among the parallel search
threads. The algorithm quickly abandons unfruitful searches and moves its
resources to where the most progress is being made.
• In its simplest form, local beam search can suffer from a lack of diversity
among the k states - they can quickly become concentrated in a small
region of the state space, making the search little more than an expensive
version of hill climbing.
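A minimal Python sketch of the basic algorithm (an illustration; `problem.random_state()` is a hypothetical helper for sampling initial states, and `value` is the caller's objective to maximize):

    # Local beam search sketch: keep the k best states among all successors.
    def local_beam_search(problem, value, k=10, steps=100):
        states = [problem.random_state() for _ in range(k)]   # assumed helper
        for _ in range(steps):
            successors = []
            for s in states:
                for a in problem.actions(s):
                    child = problem.result(s, a)
                    if problem.goal_test(child):
                        return child
                    successors.append(child)
            if not successors:
                break
            states = sorted(successors, key=value, reverse=True)[:k]
        return max(states, key=value)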
Genetic algorithms :
• A genetic algorithm (or GA) is a variant of stochastic beam search in which
successor states are generated by combining two parent states rather than by
modifying a single state.
• If the parents have better fitness, their offspring are likely to be fitter than their
parents and have a better chance of surviving.
5 phases in GA :
• Initial population : The process begins with a set of individuals which is called
a Population. Each individual is a solution to the problem to be solved.
• An individual is characterized by a set of parameters (variables) known as Genes.
Genes are joined into a string to form a Chromosome (solution).
• Fitness function : The fitness function determines how fit an individual is (the
ability of an individual to compete with other individuals). It gives a fitness score to
each individual. The probability that an individual will be selected for reproduction
is based on its fitness score.
• Selection : The idea of the selection phase is to select the fittest individuals and let
them pass their genes to the next generation.
• Two pairs of individuals (parents) are selected based on their fitness scores.
Individuals with high fitness have more chance to be selected for reproduction.
• Crossover : For each pair of parents to be mated, a crossover point is chosen at
random from within the genes; offspring are created by exchanging the genes of the
parents among themselves up to the crossover point.
• Mutation : In certain new offspring formed, some of their genes can be subjected
to a mutation with a low random probability. This implies that some of the bits
in the bit string can be flipped. (All five phases are sketched in code below.)
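A minimal Python sketch tying the phases together (an illustration, not from the slides; individuals are bit-string lists, selection is roulette-wheel, crossover is single-point, and the toy fitness function counts 1-bits):

    import random

    # Bit-string GA sketch: fitness-proportionate selection, single-point
    # crossover, bit-flip mutation.
    def genetic_algorithm(population, fitness, mutation_rate=0.01, generations=100):
        def select(pop, weights):
            return random.choices(pop, weights=weights, k=1)[0]

        def crossover(p1, p2):
            point = random.randrange(1, len(p1))      # single crossover point
            return p1[:point] + p2[point:]

        def mutate(child):
            return [1 - g if random.random() < mutation_rate else g for g in child]

        for _ in range(generations):
            weights = [fitness(ind) + 1e-9 for ind in population]  # avoid all-zero
            population = [mutate(crossover(select(population, weights),
                                           select(population, weights)))
                          for _ in range(len(population))]
        return max(population, key=fitness)

    # Toy usage: maximize the number of 1-bits in an 8-bit chromosome.
    pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
    print(genetic_algorithm(pop, fitness=sum))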
Ant Colony optimisation :
• Proposed by Marco Dorigo in 1991.
• Inspired by the behavior of real ants.
• A multi-agent approach for solving complex combinatorial optimization problems.
• Applications:
• Traveling Salesman Problem
• Scheduling Network Model Problem
• Vehicle routing
• Advantages
• Can be used in dynamic applications
• Positive Feedback leads to rapid discovery of good solutions
• Distributed computation avoids premature convergence
• Disadvantages
• Convergence is guaranteed, but time to convergence uncertain
• Coding is not straightforward
Pheromone Trails :
• Individual ants lay pheromone trails while travelling from the nest to a food source,
back to the nest, or possibly in both directions.
• The pheromone trail gradually evaporates over time.
• But pheromone trail strength accumulates as multiple ants use the same path.
• Ants are agents that move between nodes in a graph.
• They choose where to go based on pheromone strength (and maybe other
things).
• An ant’s path represents a specific candidate solution.
• When an ant has finished a solution, pheromone is laid on its path, according
to the quality of the solution.
• This pheromone trail affects the behaviour of other ants by ‘stigmergy’.
Example : A 4-city TSP
• Initially, random levels of pheromone are scattered on the edges between the four
cities A, B, C, and D. Let
• AB: 10
• AC: 10
• AD: 30
• BC: 40
• CD: 20
• BD: 10
(Figure: the four cities with these pheromone levels marked on the edges and an ant
placed at one city.)
The ACO algorithm for the TSP :
We have a TSP with n cities.
1. We place some ants at each city. Each ant then does this:
• It makes a complete tour of the cities, coming back to its starting city, using a
transition rule to decide which links to follow. By this rule, it chooses each next city at
random, but biased partly by the pheromone levels existing on each path, and biased
partly by heuristic information. (A sketch of this rule follows below.)
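A minimal Python sketch of the transition rule (an illustration, not the slides' formula: pheromone level tau is raised to alpha, heuristic desirability 1/distance to beta; the dict-based `pheromone`/`dist` structures and the equal-distance toy data are assumptions):

    import random

    # ACO transition rule sketch: pick the next city at random, biased by
    # pheromone^alpha and (1/distance)^beta.
    def choose_next_city(current, unvisited, pheromone, dist, alpha=1.0, beta=2.0):
        weights = [(pheromone[(current, j)] ** alpha) *
                   ((1.0 / dist[(current, j)]) ** beta)
                   for j in unvisited]
        return random.choices(list(unvisited), weights=weights, k=1)[0]

    # Toy usage with the 4-city pheromone levels above (distances assumed equal):
    pher = {("A", "B"): 10, ("A", "C"): 10, ("A", "D"): 30}
    d = {("A", "B"): 1.0, ("A", "C"): 1.0, ("A", "D"): 1.0}
    print(choose_next_city("A", ["B", "C", "D"], pher, d))  # D is most likely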
EBPN : Problem
• Using back-propagation network, find the new
weights for the net shown in Figure. It is
presented with the input pattern [0, 1] and
the target output is 1.
• Use a learning rate α = 0.25 and the binary sigmoidal activation function.
• The initial weights are given as,
• [w11 w21 b1] = [0.6 -0.1 0.3]
• [w12 w22 b2] = [-0.3 0.4 0.5]
• [v11 v21 b3] = [0.4 0.1 -0.2]
• The binary sigmoidal activation function is given as f(x) = 1 / (1 + exp(−ax)).
I. Calculate the net input to the hidden neurons and apply the activation function:
z_in1 = 0.2 and z_in2 = 0.9, so z1 = 0.5498 and z2 = 0.7109
II. Calculate the net input to the output neuron:
y_in = 0.09101, y = f(y_in) = 0.5227
III. Compute the error portion δk:
δk = (tk − yk) f′(y_ink) = 0.11907
For the binary sigmoidal activation function, f′(y_in) = f(y_in)[1 − f(y_in)], so
y′ = y[1 − y] = 0.2495
IV. Find the change of weights between the hidden and output layer, using the
standard rule Δv_jk = α δk z_j (verified in the sketch below).
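The arithmetic above can be checked with a few lines of Python (a verification sketch; the hidden-to-output update rule Δv_jk = α δk z_j is the standard backpropagation formula, stated here as an assumption consistent with step IV):

    import math

    f = lambda x: 1.0 / (1.0 + math.exp(-x))       # binary sigmoid, a = 1

    x1, x2, t = 0.0, 1.0, 1.0                      # input pattern [0, 1], target 1
    w11, w21, b1 = 0.6, -0.1, 0.3                  # input -> hidden neuron 1
    w12, w22, b2 = -0.3, 0.4, 0.5                  # input -> hidden neuron 2
    v11, v21, b3 = 0.4, 0.1, -0.2                  # hidden -> output
    alpha = 0.25                                   # learning rate

    z1 = f(w11*x1 + w21*x2 + b1)                   # f(0.2) = 0.5498
    z2 = f(w12*x1 + w22*x2 + b2)                   # f(0.9) = 0.7109
    y_in = v11*z1 + v21*z2 + b3                    # 0.09101
    y = f(y_in)                                    # 0.5227
    delta_k = (t - y) * y * (1 - y)                # 0.11907
    print(round(z1, 4), round(z2, 4), round(y, 4), round(delta_k, 5))

    # Hidden-to-output weight changes, Delta v_jk = alpha * delta_k * z_j:
    dv11, dv21, db3 = alpha*delta_k*z1, alpha*delta_k*z2, alpha*delta_k
    print(round(dv11, 4), round(dv21, 4), round(db3, 4))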