
Module 2

Search Algorithms
Contents
Uninformed search Algorithms: Breadth-First Search,
Uniform cost search, Depth-First Search.
Informed search Algorithms: Best First Search, A*, AO*,
Hill Climbing, Generate & Test, Alpha-Beta pruning,
Min-max search.
Introduction
The aim of an AI rational agent is to solve a problem without human intervention.
To solve the problem:
1. There is a need to represent the problem. This representation is called formalization.
2. The agent uses some strategy to solve the problem. This strategy is a searching technique.
Problem formulation
This step defines the problem, which helps to understand and decide the course of action
that needs to be considered to achieve the goal.
Every problem should be properly formulated before applying any search algorithm,
because every algorithm demands the problem in a specific form.
Components of the problem:
Problem statement:
What is to be done?
Why is it important to build the AI system?
What are the advantages of the proposed system?
Example: Predict whether or not a patient has diabetes.
Goal or solution: Some machine learning technique can be used to solve this problem.
Solution space: All the alternative ways in which the problem can be solved; this is known
as the solution space.
Operators: Actions taken while solving the problem.
Examples of problem formulation
Problem statement: A mouse is hungry and needs to reach the cheese placed in the
environment.
Problem solution: Search algorithms can be used to find the shortest path.
Solution space: Multiple paths are possible.
Operators: UP, DOWN, RIGHT and LEFT.
2. Search Strategies
Important Terminologies:
• Search: Searching is a step-by-step procedure to solve a search problem in a
given search space. A search problem can have three main factors:
• Search space: The set of possible conditions and solutions.
• Start state: The state from which the agent begins the search.
• Goal state: The ultimate aim of the searching process.
• Search tree: A tree representation of the search space, showing possible
solutions from the initial state.
• Actions: A description of all the actions available to the agent.
• Transition model: A description of what each action does.
• Path cost: A function that assigns a numeric cost to each path.
• Solution: An action sequence that leads from the start node to the goal node.
• Optimal solution: A solution that has the lowest cost among all solutions.
Example
• States: The state is determined by both the agent location and the dirt locations. The
agent is in one of two locations, each of which might or might not contain dirt.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and
Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in
the leftmost square, moving Right in the rightmost square, and Sucking in a clean square
have no effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
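As an illustration, here is a minimal Python sketch of this formulation. The state encoding (agent position plus one dirt flag per square) and all names are assumptions made for the example, not part of the original description.

```python
# A minimal sketch of the two-square world (names and encoding are illustrative).
# A state is (agent_pos, dirt_a, dirt_b): agent_pos is 'A' or 'B',
# dirt_a/dirt_b say whether that square is dirty.

ACTIONS = ['Left', 'Right', 'Suck']

def transition(state, action):
    """Transition model: return the state that results from an action."""
    pos, dirt_a, dirt_b = state
    if action == 'Left':
        return ('A', dirt_a, dirt_b)      # agent ends in square A (no effect if already there)
    if action == 'Right':
        return ('B', dirt_a, dirt_b)      # agent ends in square B (no effect if already there)
    if action == 'Suck':
        if pos == 'A':
            return ('A', False, dirt_b)   # sucking in a clean square has no effect
        return ('B', dirt_a, False)
    return state

def goal_test(state):
    """Goal test: every square is clean."""
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

def path_cost(path):
    """Each step costs 1, so the path cost is the number of actions."""
    return len(path)

# Example: agent in A, both squares dirty.
start = ('A', True, True)
print(goal_test(transition(transition(start, 'Suck'), 'Right')))  # False: B is still dirty
```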
The 8-puzzle
• States: A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal
can be reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space
Left, Right, Up, or Down. Different subsets of these are possible depending on where
the blank is.
• Transition model: Given a state and action, this returns the resulting state; for example,
if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank
switched.
• Goal test: This checks whether the state matches the goal configuration.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
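A minimal Python sketch of this formulation, assuming a state is stored as a 9-tuple in row-major order with 0 standing for the blank; the goal configuration and names are illustrative assumptions.

```python
# Hedged sketch of the 8-puzzle formulation.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)        # an assumed goal configuration

def actions(state):
    """Actions: moves of the blank space that stay on the board."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append('Left')
    if col < 2: moves.append('Right')
    if row > 0: moves.append('Up')
    if row < 2: moves.append('Down')
    return moves

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    offset = {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}[action]
    j = i + offset
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

def goal_test(state):
    return state == GOAL

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)        # blank in the centre
print(actions(start))                      # ['Left', 'Right', 'Up', 'Down']
print(result(start, 'Left'))               # (1, 2, 3, 0, 4, 5, 6, 7, 8)
```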
Properties of Search Algorithms
Measuring search performance:
Completeness: A search algorithm is said to be complete if it guarantees to return a
solution whenever one exists, for any input.
Optimality: If the solution found is the best (lowest-cost) one among all possible
solutions, it is said to be an optimal solution.
Time complexity: The time taken by the algorithm to complete its task.
Space complexity: The maximum storage space required at any point during the search.
[Diagram: overview of search algorithms — AO* Search, Hill climbing, Generate & Test, Alpha-Beta pruning, Min-max search]
Uninformed search
'Uninformed search' means the machine blindly follows the algorithm regardless of whether
it is right or wrong, efficient or inefficient.
These algorithms are brute-force operations, and they don't have extra information about
the search space; the only information they have is how to traverse or visit the nodes in
the tree.
Thus uninformed search algorithms are also called blind search algorithms.
The search algorithm produces the search tree without using any domain knowledge, which
makes it brute force in nature.
They have no background information on how to approach the goal. Nevertheless, these are
the basics of search algorithms in AI.
Breadth-First Search (BFS)
BFS is a simple strategy in which the root node is expanded first, then all the successors
of the root node are expanded next, then their successors, and so on.
In general, all the nodes at a given depth in the search tree are expanded before any
nodes at the next level are expanded.
It begins searching from the root node (any node can be taken as the root node), expands
all the successors of that node before expanding them further, and so proceeds breadthwise
across each level rather than searching depth-wise.
Pictorial representation of how BFS works
Given a hill, our goal is simply to find gold.
Breadth-first search has no prior knowledge of the whereabouts of the gold, so the robot
simply digs 1 foot deep along the 10-foot strip; if it doesn't find any gold, it digs
1 foot deeper along the whole strip.
Graph Example
It starts from the root node A and then traverses node B.
We then expand the other child of A, i.e. node C, because BFS explores level by level.
After C, we move to the next level and traverse the nodes from D to G.
From G we move to H, and then from H to K in this typical example.
For the traversal we have only taken the lexicographical order into consideration.
This is how the BFS algorithm is implemented.
Note: all nodes in the graph are represented by capital letters in the explanation.
Algorithm
Step 1: Initialize the queue with the start vertex and mark this vertex as visited.
Step 2: While the queue is not empty:
        delete a vertex u from the queue;
        identify all the vertices v adjacent to u;
        if a vertex adjacent to u is not visited, mark it as visited;
        insert all the newly marked vertices into the queue;
        output u, v.
(A Python sketch of this procedure is given below.)
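A minimal Python sketch of these steps, assuming the graph is given as an adjacency dictionary; the adjacency lists shown here encode the graph of solved example 1 below, and the function name is chosen for the example.

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal following the steps above.
    graph: dict mapping a vertex to the list of its adjacent vertices."""
    visited = {start}                  # Step 1: mark the start vertex as visited
    queue = deque([start])             # Step 1: initialise the queue with it
    order = []
    while queue:                       # Step 2: while the queue is not empty
        u = queue.popleft()            # delete a vertex u from the queue
        order.append(u)
        for v in graph[u]:             # identify all vertices v adjacent to u
            if v not in visited:       # if v is not visited
                visited.add(v)         # mark it as visited
                queue.append(v)        # and insert it into the queue
    return order

# Adjacency lists (in lexicographical order) of solved example 1 below:
graph = {'A': ['B', 'C', 'D', 'E'], 'B': ['A', 'F'], 'C': ['A', 'G'],
         'D': ['A', 'F'], 'E': ['A', 'G'], 'F': ['B', 'D'], 'G': ['C', 'E']}
print(bfs(graph, 'A'))   # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
```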
Solved example
Advantages and Disadvantages
Advantages:
• BFS will definitely provide a solution if one exists, hence it provides a high level of
accuracy.
• If there is more than one solution for a given problem, BFS will provide the minimal
solution, i.e. the one that requires the least number of steps.
Disadvantages:
• It requires a lot of memory, since each level of the tree must be saved in memory in
order to expand the next level.
• BFS needs a lot of time if the solution is far away from the root node.
Solved example 1

u = del(Q) | v = adj. to u | Nodes visited       | Queue (Q)  | T(u, v)
-          | -             | A                   | A          | -
A          | B, C, D, E    | A, B, C, D, E       | B, C, D, E | A-B, A-C, A-D, A-E
B          | A, F          | A, B, C, D, E, F    | C, D, E, F | B-F
C          | A, G          | A, B, C, D, E, F, G | D, E, F, G | C-G
D          | A, F          | A, B, C, D, E, F, G | E, F, G    | D-F
E          | A, G          | A, B, C, D, E, F, G | F, G       | E-G
F          | B, D          | A, B, C, D, E, F, G | G          | -
G          | C, E          | A, B, C, D, E, F, G | empty      | -
Solved example 2

u = del(Q) | v = adj. to u | Nodes visited    | Queue (Q)  | T(u, v)
-          | -             | A                | A          | -
A          | B, C, D       | A, B, C, D       | B, C, D    | A-B, A-C, A-D
B          | A, E, F       | A, B, C, D, E, F | C, D, E, F | B-E, B-F
C          | A, F          | A, B, C, D, E, F | D, E, F    | C-F
D          | A             | A, B, C, D, E, F | E, F       | -
Solved Example 3

u = del(Q) | v = adj. to u | Nodes visited | Queue (Q) | T(u, v)
-          | -             | 0             | 0         | -
0          | 1, 2, 3       | 0, 1, 2, 3    | 1, 2, 3   | 0-1, 0-2, 0-3
1          | 0             | 0, 1, 2, 3    | 2, 3      | -
2          | 0, 4          | 0, 1, 2, 3, 4 | 3, 4      | 2-4
3          | 0             | 0, 1, 2, 3, 4 | 4         | -
4          | 2             | 0, 1, 2, 3, 4 | empty     | -
Solved Example 4

u = del(Q) | v = adj. to u  | Nodes visited         | Queue (Q)  | T(u, v)
-          | -              | V                     | V          | -
V          | V1, V2         | V, V1, V2             | V1, V2     | V-V1, V-V2
V1         | V, V2, V3, V5  | V, V1, V2, V3, V5     | V2, V3, V5 | V1-V3, V1-V5, V1-V2
V2         | V, V1, V3, V4  | V, V1, V2, V3, V4, V5 | V3, V5, V4 | V2-V3, V2-V4
V3         | V1, V2, V4, V5 | V, V1, V2, V3, V4, V5 | V5, V4     | V3-V4, V3-V5
V5         | V1, V3         | V, V1, V2, V3, V4, V5 | V4         | -
V4         | V2, V3         | V, V1, V2, V3, V4, V5 | empty      | -
Solved Example 5

u = del(Q) | v = adj. to u | Nodes visited       | Queue (Q)  | T(u, v)
-          | -             | A                   | A          | -
A          | B, C          | A, B, C             | B, C       | A-B, A-C
B          | A, D, E       | A, B, C, D, E       | C, D, E    | B-D, B-E
C          | A, F, G       | A, B, C, D, E, F, G | D, E, F, G | C-F (goal node is reached)
Practice Problems
Algorithm performance
Time complexity: O(|V| + |E|)
▪ The number of vertices/nodes in the graph is |V|.
▪ The number of edges is |E|.
Space complexity: O(|V|)
▪ Space is measured in terms of the maximum number of nodes stored in memory.
Depth First Search
• Depth-first search is an uninformed search method and a recursive algorithm.
• In a recursive algorithm, the output of the previous step becomes the input of the next
step.
• Depth-first search always expands the deepest node first.
• The search proceeds immediately to the deepest level of the search tree, where the nodes
have no successors.
• The depth-first search (DFS) algorithm starts with the initial node of graph G and goes
deeper and deeper until we find the goal node or a node that has no children.
• Because of the recursive nature, a stack data structure can be used to implement the DFS
algorithm. The process of implementing DFS is similar to the BFS algorithm.
Pictorial representation of the DFS working
Algorithm
The step-by-step process to implement the DFS traversal is given as follows (a Python
sketch is given after the steps):
1. First, create a stack with the total number of vertices in the graph.
2. Choose any vertex as the starting point of the traversal and push that vertex onto the
   stack.
3. Push a non-visited vertex (adjacent to the vertex on the top of the stack) onto the top
   of the stack.
4. Repeat steps 3 and 4 until no vertices are left to visit from the vertex on the stack's
   top.
5. If no vertex is left, go back and pop a vertex from the stack.
6. Repeat steps 2, 3 and 4 until the stack is empty.
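A minimal Python sketch of this stack-based procedure, using the same adjacency-dictionary representation as the BFS sketch; the adjacency lists here encode the graph of solved example 1 below.

```python
def dfs(graph, start):
    """Iterative depth-first traversal using an explicit stack."""
    visited = [start]                         # order in which nodes are first visited
    stack = [start]                           # push the starting vertex
    while stack:
        top = stack[-1]
        # push a non-visited vertex adjacent to the vertex on the stack's top
        unvisited = [v for v in graph[top] if v not in visited]
        if unvisited:
            nxt = unvisited[0]                # lexicographical order if lists are sorted
            visited.append(nxt)
            stack.append(nxt)
        else:
            stack.pop()                       # no vertex left: backtrack by popping
    return visited

graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'G'], 'D': ['B', 'F'],
         'F': ['D'], 'G': ['C', 'E'], 'E': ['G']}
print(dfs(graph, 'A'))   # ['A', 'B', 'D', 'F', 'C', 'G', 'E']
```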
Solved example 1

Stack      | v = adj(s[top]) | Nodes visited       | Pop(stack) | Output
-          | -               | A                   | -          | -
A          | B               | A, B                | -          | A-B
A, B       | D               | A, B, D             | -          | B-D
A, B, D    | F               | A, B, D, F          | -          | D-F
A, B, D, F | -               | A, B, D, F          | F          | -
A, B, D    | -               | A, B, D, F          | D          | -
A, B       | -               | A, B, D, F          | B          | -
A          | C               | A, B, D, F, C       | -          | A-C
A, C       | G               | A, B, D, F, C, G    | -          | C-G
A, C, G    | E               | A, B, D, F, C, G, E | -          | G-E
A, C, G, E | -               | A, B, D, F, C, G, E | E          | -
A, C, G    | -               | A, B, D, F, C, G, E | G          | -
A, C       | -               | A, B, D, F, C, G, E | C          | -
A          | -               | A, B, D, F, C, G, E | A          | -

Final result: A - B - D - F - C - G - E
Solved example 2

Stack      | v = adj(s[top]) | Nodes visited | Pop(stack) | Output
-          | -               | 0             | -          | -
0          | 1               | 0, 1          | -          | 0-1
0, 1       | 2               | 0, 1, 2       | -          | 1-2
0, 1, 2    | 4               | 0, 1, 2, 4    | -          | 2-4
0, 1, 2, 4 | -               | 0, 1, 2, 4    | 4          | -
0, 1, 2    | -               | 0, 1, 2, 4    | 2          | -
0, 1       | -               | 0, 1, 2, 4    | 1          | -
0          | 3               | 0, 1, 2, 4, 3 | -          | 0-3
0, 3       | -               | 0, 1, 2, 4, 3 | 3          | -
0          | -               | 0, 1, 2, 4, 3 | 0          | -

Final result: 0 - 1 - 2 - 4 - 3
Solved example 3

Stack      | v = adj(s[top]) | Nodes visited    | Pop(stack) | Output
-          | -               | A                | -          | -
A          | B               | A, B             | -          | A-B
A, B       | E               | A, B, E          | -          | B-E
A, B, E    | -               | A, B, E          | E          | -
A, B       | F               | A, B, E, F       | -          | B-F
A, B, F    | C               | A, B, E, F, C    | -          | F-C
A, B, F, C | -               | A, B, E, F, C    | C          | -
A, B, F    | -               | A, B, E, F, C    | F          | -
A, B       | -               | A, B, E, F, C    | B          | -
A          | D               | A, B, E, F, C, D | -          | A-D
A, D       | -               | A, B, E, F, C, D | D          | -
A          | -               | A, B, E, F, C, D | A          | -

Final result: A - B - E - F - C - D
Solved example 4

Stack         | v = adj(s[top]) | Nodes visited | Pop(stack) | Output
-             | -               | A             | -          | -
A             | B               | A, B          | -          | A-B
A, B          | C               | A, B, C       | -          | B-C
A, B, C       | D               | A, B, C, D    | -          | C-D
A, B, C, D    | E               | A, B, C, D, E | -          | D-E
A, B, C, D, E | -               | A, B, C, D, E | E          | -
A, B, C, D    | -               | A, B, C, D, E | D          | -
A, B, C       | -               | A, B, C, D, E | C          | -
A, B          | -               | A, B, C, D, E | B          | -
A             | -               | A, B, C, D, E | A          | -

Final result: A - B - C - D - E
Solved example 5

Stack         | v = adj(s[top]) | Nodes visited    | Pop(stack) | Output
-             | -               | a                | -          | -
a             | c               | a, c             | -          | a-c
a, c          | d               | a, c, d          | -          | c-d
a, c, d       | -               | a, c, d          | d          | -
a, c          | f               | a, c, d, f       | -          | c-f
a, c, f       | b               | a, c, d, f, b    | -          | f-b
a, c, f, b    | e               | a, c, d, f, b, e | -          | b-e
a, c, f, b, e | -               | a, c, d, f, b, e | e          | -
a, c, f, b    | -               | a, c, d, f, b, e | b          | -
a, c, f       | -               | a, c, d, f, b, e | f          | -
a, c          | -               | a, c, d, f, b, e | c          | -
a             | -               | a, c, d, f, b, e | a          | -

Second component (starting from g):

Stack      | v = adj(s[top]) | Nodes visited | Pop(stack) | Output
-          | -               | g             | -          | -
g          | h               | g, h          | -          | g-h
g, h       | i               | g, h, i       | -          | h-i
g, h, i    | j               | g, h, i, j    | -          | i-j
g, h, i, j | -               | g, h, i, j    | j          | -
g, h, i    | -               | g, h, i, j    | i          | -
g, h       | -               | g, h, i, j    | h          | -
g          | -               | g, h, i, j    | g          | -
Algorithm performance
Time complexity: O(|V| + |E|)
▪ The number of vertices/nodes in the graph is |V|.
▪ The number of edges is |E|.
Space complexity: O(|V|)
▪ Depth-first tree search needs to store only a single path from the root to a leaf node.
▪ Once a node has been expanded, it can be removed from memory as soon as all its
descendants have been fully explored.
▪ Depth-first search therefore requires less storage.
Uniform cost search algorithm
This algorithm uses a backtracking approach.
It is used for weighted tree/graph traversal.
The goal is to find the path with the lowest cumulative cost, i.e. the optimal path.
Node expansion is always based on the path cost.
A priority queue is used for the implementation.
Advantages: it finds an optimal solution.
Disadvantages: the algorithm may get stuck in an infinite loop, for example when there is
an endless sequence of zero-cost actions.
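A minimal Python sketch of uniform cost search with a priority queue (heapq). The adjacency lists and edge costs are reconstructed from the open-list steps of the solved example that follows, so treat the graph as an assumption; the original figure may contain further edges.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Uniform cost search over a weighted graph given as
    {node: [(neighbour, edge_cost), ...]}. Returns (cost, path) or None."""
    frontier = [(0, start, [start])]                 # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest node expanded first
        if node == goal:
            return cost, path                        # goal popped: optimal path found
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, neighbour, path + [neighbour]))
    return None

# Edge costs read off the solved example below (assumed reconstruction):
graph = {'A': [('B', 7), ('C', 3)], 'C': [('B', 2), ('D', 9)],
         'B': [('E', 6)], 'E': [('G', 1), ('F', 2), ('D', 3)]}
print(uniform_cost_search(graph, 'A', 'G'))   # (12, ['A', 'C', 'B', 'E', 'G'])
```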
Solved Example
Let us assume the goal state is G (the steps of the solution are shown below).

Steps of the solution:

Node expanded | Open list
-             | {A(0)}
A             | {C(3), B(7)}
C             | {B(2), D(9)}
B             | {E(6)}
E             | {G(1), F(2), D(3)}
G             | - (GOAL REACHED)

Path: A - C - B - E - G
Cost = 3 + 2 + 6 + 1 = 12
Q2. Goal changed from G to H (the steps to be followed are the same as in the previous
example).
Let us assume the goal state is H.
Informed search algorithms
Informed search Algorithms: Best First Search, A*, AO*,
Hill Climbing, Generate & Test, Alpha-Beta pruning,
Min-max search.
Introduction
In uninformed search algorithms, the agent explores the entire search space for all
possible solutions of the problem without any additional knowledge about the search space.
Because of this, it takes time to reach the solution.
An informed search algorithm, in contrast, uses knowledge such as how far we are from the
goal, the path cost, how to reach the goal node, etc. This knowledge helps the agent
explore less of the search space and find the goal node more efficiently.
Informed search algorithms use the idea of a heuristic, so they are also called heuristic
search.
Heuristic function: A heuristic is a function used in informed search to find the most
promising path. It takes the current state of the agent as its input and produces an
estimate of how close the agent is to the goal. The heuristic method might not always give
the best solution, but it is guaranteed to find a good solution in reasonable time. The
heuristic function estimates how close a state is to the goal.
In the informed search:
✔ Best First Search,
✔ A*,
✔ AO*,
✔ Hill Climbing,
✔ Generate & Test,
✔ Alpha-Beta pruning,
✔ Min-max search.
Pure Heuristic Search:
Pure heuristic search is the simplest form of heuristic search algorithm.
It expands nodes based on their heuristic value.
It maintains two lists, an OPEN list and a CLOSED list.
In the CLOSED list, it places the nodes that have already been expanded, and
in the OPEN list, it places the nodes that have not yet been expanded.
Best-first Search Algorithm (Greedy Search)
The greedy best-first search algorithm always selects the path which appears best at that
moment.
It is a combination of depth-first search and breadth-first search.
It uses a heuristic function to guide the search; best-first search allows us to take
advantage of both algorithms.
With the help of best-first search, at each step we can choose the most promising node. In
the best-first search algorithm we expand the node which is closest to the goal node,
where the closeness is estimated by the heuristic function.
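A minimal Python sketch of greedy best-first search; the graph, the heuristic values and the function name are illustrative assumptions, not taken from the practice problems below.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the smallest
    heuristic value h(n). graph maps a node to its neighbours."""
    frontier = [(h[start], start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)    # most promising node so far
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None

# Illustrative graph and heuristic values (assumed for the example):
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F'],
         'E': ['H'], 'F': ['I', 'G']}
h = {'S': 15, 'A': 11, 'B': 5, 'C': 9, 'D': 9, 'E': 4,
     'F': 2, 'H': 7, 'I': 3, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'B', 'F', 'G']
```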
3)

node | h(n)
A    | 11
B    | 5
C, D | 9
E    | 4
F    | 2
H    | 7
I, J | 3
S    | 15
G    | 0

4)
A* Search Algorithm
• A* Search Algorithm is a simple and efficient search algorithm that can be used to find
the optimal path between two nodes in a graph.
• It is used for shortest-path finding.
• It is an extension of Dijkstra's shortest path algorithm (Dijkstra's Algorithm).
• The sum of two variables' values determines the node it picks at any point in time.
• At each step, it picks the node with the smallest value of 'f' (the sum of 'g' and 'h')
and processes that node/cell.
• 'g' is the distance it takes to get to a certain square on the grid from the starting
point, following the path we generated to get there.
• 'h' is the heuristic, which is an estimate of the distance it takes to get to the finish
line from that square on the grid.
• Steps (a Python sketch is given below):
1. Add the start node to the open list.
2. Among the nodes on the open list, find the node with the least cost f.
3. Move it to the closed list. For each of the 8 cells adjacent to the current node: if
   the node is not reachable or is already on the closed list, ignore it; otherwise
   compute its f value and add it to (or update it on) the open list.
4. Stop when you find the destination, or when the open list is empty, in which case the
   destination cannot be reached through the available points.
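A minimal Python sketch of A* on a general graph (rather than an 8-neighbour grid); the graph, step costs and heuristic values are illustrative assumptions.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the smallest f = g + h.
    graph maps a node to a list of (neighbour, step_cost) pairs."""
    frontier = [(h[start], 0, start, [start])]       # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbour, step_cost in graph.get(node, []):
            new_g = g + step_cost                    # cost from start along this path
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g
                new_f = new_g + h[neighbour]         # estimated total cost f = g + h
                heapq.heappush(frontier, (new_f, new_g, neighbour, path + [neighbour]))
    return None

# Illustrative graph, costs and heuristic (assumed, not from the slides):
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5), ('G', 12)],
         'B': [('C', 2)], 'C': [('G', 3)]}
h = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (8, ['S', 'A', 'B', 'C', 'G'])
```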
9)
AO* Search Algorithm
• The AO* algorithm is a best-first search algorithm.
• The AO* algorithm uses the concept of AND-OR graphs to decompose any complex problem
into a smaller set of sub-problems, which are then solved.
• AND-OR graphs are specialized graphs used for problems that can be broken down into
sub-problems, where the AND side of the graph represents a set of tasks that must all be
done to achieve the main goal, whereas the OR side of the graph represents the different,
alternative ways of performing a task to achieve the same main goal.
Working of the AO* algorithm:
The AO* algorithm works on the formula given below:
f(n) = g(n) + h(n)
where
• g(n): the actual cost of traversal from the initial state to the current state.
• h(n): the estimated cost of traversal from the current state to the goal state.
• f(n): the estimated total cost of traversal from the initial state to the goal state
through node n.
Hill Climbing Algorithm
• Hill climbing algorithm is a local search algorithm which
continuously moves in the direction of increasing
elevation/value to find the peak of the mountain or best
solution to the problem.
• It terminates when it reaches a peak value where no
neighbor has a higher value.
• It is also called greedy local search as it only looks to its
good immediate neighbor state and not beyond that.
• A node of the hill climbing algorithm has two components: state and value.
• In this algorithm, we don't need to maintain and handle a search tree or graph, as it
only keeps a single current state (see the sketch below).
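A minimal Python sketch of this idea, keeping only a single current state and moving to the best neighbour as long as it improves the value; the objective function and the neighbour function are illustrative assumptions.

```python
import random

def hill_climbing(initial_state, neighbours, value):
    """Simple hill climbing: move to a better neighbour until no neighbour
    has a higher value (a peak has been reached)."""
    current = initial_state
    while True:
        best_neighbour = max(neighbours(current), key=value, default=None)
        if best_neighbour is None or value(best_neighbour) <= value(current):
            return current                  # no uphill move left: local/global maximum
        current = best_neighbour            # greedy move to the better neighbour

# Illustrative objective: maximise f(x) = -(x - 3)^2 over integer states.
value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climbing(random.randint(-10, 10), neighbours, value))   # 3
```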
Features of Hill Climbing:
• Generate and Test variant: Hill climbing is a variant of the Generate and Test method.
The Generate and Test method produces feedback which helps to decide which direction to
move in the search space.
• Greedy approach: The hill-climbing search moves in the direction which optimizes the
cost.
• No backtracking: It does not backtrack in the search space, as it does not remember
previous states.
State-space Diagram for Hill Climbing:
• The state-space landscape is a graphical representation of the hill-climbing algorithm:
it shows a graph between the various states of the algorithm and the objective
function/cost.
• On the Y-axis we take the function, which can be an objective function or a cost
function, and the state space on the X-axis.
• If the function on the Y-axis is cost, then the goal of the search is to find the global
minimum.
• If the function on the Y-axis is the objective function, then the goal of the search is
to find the global maximum.
Different regions:
• Local Maximum: Local maximum is a
state which is better than its
neighbor states, but there is also
another state which is higher than it.
• Global Maximum: Global maximum
is the best possible state of state
space landscape. It has the highest
value of objective function.
• Current state: It is a state in a
landscape diagram where an agent
is currently present.
• Flat local maximum: It is a flat region of the landscape where all the neighbor states
of the current state have the same value.
• Shoulder: It is a plateau region
which has an uphill edge.
Problems in Hill Climbing Algorithm:
• 1. Local Maximum: A local maximum is a
peak state in the landscape which is
better than each of its neighboring states,
but there is another state also present
which is higher than the local maximum.
• 2. Plateau: A plateau is a flat area of the search space in which all the neighbor
states of the current state have the same value; because of this, the algorithm cannot
find a best direction in which to move. A hill-climbing search may get lost in the plateau
area.
• 3. Ridges: A ridge is a special form of local maximum. It is an area which is higher
than its surrounding areas but which itself has a slope, so it cannot be reached in a
single move.
Generate and Test Search Algorithm
• Generate and Test Search is a heuristic search technique based
on Depth First Search with Backtracking which guarantees to
find a solution if done systematically and there exists a solution.
• In this technique, all the solutions are generated and tested for
the best solution.
• It ensures that the best solution is checked against all possible
generated solutions.
• The evaluation is carried out by the heuristic function.
Algorithm steps (a Python sketch is given after the steps):
1. Generate a possible solution. For example, generate a particular point in the problem
   space or a path from the start state.
2. Test to see whether this is an actual solution by comparing the chosen point, or the
   endpoint of the chosen path, with the set of acceptable goal states.
3. If a solution is found, quit. Otherwise go to step 1.
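A minimal Python sketch of these three steps; the generator and the goal test used here are illustrative assumptions.

```python
from itertools import product

def generate_and_test(generator, is_goal):
    """Generate and test: enumerate candidate solutions one by one and stop
    at the first one that passes the goal test."""
    for candidate in generator:          # 1. generate a possible solution
        if is_goal(candidate):           # 2. test whether it is an actual solution
            return candidate             # 3. if a solution is found, quit
    return None                          # otherwise no solution exists

# Illustrative use: find a pair (x, y) with x + y == 7 and x * y == 12.
candidates = product(range(10), repeat=2)
solution = generate_and_test(candidates,
                             lambda p: p[0] + p[1] == 7 and p[0] * p[1] == 12)
print(solution)   # (3, 4)
```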
Properties of Good Generators:
Good generators need to have the following properties:
❑ Complete: Good generators need to be complete, i.e. they should generate all the
possible solutions and cover all the possible states. In this way, we can guarantee that
our algorithm converges to the correct solution at some point in time.
❑ Non-redundant: Good generators should not yield a duplicate solution at any point in
time, since duplicates reduce the efficiency of the algorithm, increasing the search time
and making the time complexity exponential.
❑ Informed: Good generators have knowledge about the search space, which they maintain in
the form of an array of knowledge. This can be used to check how far the agent is from the
goal, to calculate the path cost, and even to find a way to reach the goal.
Mini-Max Algorithm
• The mini-max algorithm is a recursive or backtracking algorithm which is used in
decision-making and game theory. It provides an optimal move for the player, assuming that
the opponent is also playing optimally.
• The min-max algorithm is mostly used for game playing in AI, such as Chess, Checkers,
Tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax
decision for the current state.
• In this algorithm two players play the game; one is called MAX and the other is called
MIN.
• The players compete so that the opponent gets the minimum benefit while they themselves
get the maximum benefit.
• Both players of the game are opponents of each other, where MAX will select the
maximized value and MIN will select the minimized value.
• The minimax algorithm performs a depth-first search for the exploration of the complete
game tree.
• The steps of the mini-max algorithm can be stated as follows (see the Python sketch
after the list):
1. Create the entire game tree.
2. Evaluate the scores for the leaf nodes based on the evaluation function.
3. Backtrack from the leaf nodes to the root:
   • For the Maximizer, choose the node with the maximum score.
   • For the Minimizer, choose the node with the minimum score.
4. At the root node, choose the node with the maximum value and select the respective
   move.
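A minimal Python sketch of minimax over an explicit game tree; the tree, the leaf scores and the names are illustrative assumptions.

```python
def minimax(node, is_max, game_tree, scores):
    """Minimax over an explicit game tree.
    game_tree maps an internal node to its children; leaves have scores."""
    if node not in game_tree:                 # leaf: evaluate with the scores table
        return scores[node]
    child_values = [minimax(child, not is_max, game_tree, scores)
                    for child in game_tree[node]]
    return max(child_values) if is_max else min(child_values)

# Illustrative two-ply game tree (names and scores are assumptions):
game_tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
scores = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
print(minimax('root', True, game_tree, scores))   # MAX at root: max(min(3,5), min(2,9)) = 3
```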
Alpha-Beta Pruning
• Alpha-beta pruning is a modified version of the minimax algorithm.
• It is an optimization technique for the minimax algorithm.
• It is a technique by which we can compute the correct minimax decision without checking
every node of the game tree; this technique is called pruning.
• It involves two threshold parameters, alpha and beta, used for future expansion, which
is why it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
• Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not
only the tree leaves but also entire sub-trees.
• The condition required for alpha-beta pruning is: α ≥ β.
• The two parameters can be defined as:
❑ Alpha: The best (highest-value) choice we have found so far at any point along the path
of the Maximizer. The initial value of alpha is -∞.
❑ Beta: The best (lowest-value) choice we have found so far at any point along the path of
the Minimizer. The initial value of beta is +∞.
• Alpha-beta pruning returns the same move as the standard minimax algorithm, but it
removes all the nodes which do not really affect the final decision yet make the algorithm
slow. Hence, by pruning these nodes, it makes the algorithm faster.
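A minimal Python sketch that extends the minimax example above with alpha-beta cutoffs; the same illustrative tree and scores are assumed, and the last leaf of the right subtree ends up pruned.

```python
import math

def alphabeta(node, is_max, game_tree, scores, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning: stop exploring a branch as soon as
    alpha >= beta, since it cannot affect the final decision."""
    if node not in game_tree:                      # leaf node: return its score
        return scores[node]
    if is_max:
        value = -math.inf
        for child in game_tree[node]:
            value = max(value, alphabeta(child, False, game_tree, scores, alpha, beta))
            alpha = max(alpha, value)              # best choice for MAX so far
            if alpha >= beta:
                break                              # prune the remaining children
        return value
    value = math.inf
    for child in game_tree[node]:
        value = min(value, alphabeta(child, True, game_tree, scores, alpha, beta))
        beta = min(beta, value)                    # best choice for MIN so far
        if alpha >= beta:
            break                                  # prune the remaining children
    return value

# Same illustrative tree as in the minimax sketch: the result is identical,
# but leaf R2 is never evaluated (pruned after R1 gives 2 <= alpha = 3).
game_tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
scores = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
print(alphabeta('root', True, game_tree, scores))   # 3
```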
Thank you
