COSC 311: Principles of Artificial Intelligence
Ali Ahmad Aminu, PhD
Computer Science Department, Gombe State University
2024/2025 Session
Week 2: Search
Week 2: Outline
• Search
• Problem Solving by Search
• Search Strategies
• Uninformed Search
• Informed Search
Week 2: Introduction to Search
• What is Search?
• Search is a class of techniques for systematically finding or constructing
solutions to problems.
• Search is a fundamental technique in AI used to explore possible solutions to
a problem by systematically examining different states (candidate solutions)
• In AI, search algorithms are used when:
• The solution is not known in advance but can be discovered by navigating
through different states or configurations
• The problem can be broken down into states and transitions.
• The goal is to find an optimal or feasible sequence of actions leading from an
initial state to a desired goal state.
Week 2: Why is search important in
AI?
• Many real-world and AI problems can be modeled as search problems, including:
Problem Type Example Applications
Path Planning Robot navigation, GPS route optimization
Puzzles 8-Puzzle, Sudoku, Rubik’s Cube
Games Chess, Tic-Tac-Toe, Poker (adversarial search)
Scheduling Job-shop scheduling, airline crew assignments
Optimization Traveling Salesman Problem (TSP), resource allocation
Week 2: Problem Solving by
Search
• A search problem is defined by five key components:
• Initial State (s₀)
• The starting point of the search (e.g., a scrambled 8-puzzle).
• Actions (Actions(s))
• The set of possible moves from a given state (e.g., moving the blank tile in the 8-puzzle).
• Transition Model (Result(s, a))
• Describes the state resulting from taking action *a* in state *s*.
• Defines the state space (all reachable states from s₀ via actions).
• Goal Test (Goal(s))
• Determines whether a given state is a solution (e.g., tiles in correct order for the 8-puzzle).
• Path Cost (c(s, a, s’))
• Assigns a cost to each action (often used in optimization, e.g., number of moves in the 8-
puzzle).
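As a rough sketch of how these five components translate into code, the interface below uses Python; the class and method names are illustrative, not taken from any particular library.

```python
# Sketch of a generic search problem interface (names are illustrative).
class SearchProblem:
    def initial_state(self):
        """Return the initial state s0."""
        raise NotImplementedError

    def actions(self, s):
        """Return the actions applicable in state s (Actions(s))."""
        raise NotImplementedError

    def result(self, s, a):
        """Transition model Result(s, a): the state reached by doing a in s."""
        raise NotImplementedError

    def goal_test(self, s):
        """Goal(s): return True if s is a goal state."""
        raise NotImplementedError

    def step_cost(self, s, a, s2):
        """Path cost contribution c(s, a, s') of a single action."""
        return 1
```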
Week 2: Problem Solving by
Search
• State Space representation
• The state space forms a graph: (V, E) where:
V is a set of nodes (states)
E is a set of edges (actions)
Each edge is directed from one node to another node
• Nodes correspond to states
• Edges correspond to applicable actions
• A path in the state space is a sequence of states connected by a
sequence of actions.
• A solution is a path from the initial state to a goal state.
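For illustration, a small state space can be written down directly as a directed graph; the node names below are arbitrary (they mirror the small example tree used in the breadth-first trace later in these slides).

```python
# A toy state space as an adjacency dict: nodes are states, edges are actions.
state_space = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F", "G"],
    "D": [], "E": [], "F": [], "G": [],
}

# A path is a sequence of states connected by actions, e.g. A -> B -> E.
path = ["A", "B", "E"]
```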
Week 2: Problem Solving by
Search - Example
• The 8-Puzzle
• Problem Statement: Rearrange tiles from a scrambled initial state to a goal state.
• Rule: you can slide a tile into the blank spot, i.e. move the blank spot around.
• Figure 1: 8-Puzzle initial and goal state
• Initial state
• A random arrangement of the tiles, e.g. the state shown on the left of Figure 1.
• Actions/Operations
• Moving the blank up, down, left, or right. Can every action be performed in every state?
• Transition Model
• The new state after moving the blank (e.g., moving the blank up swaps it with the tile above).
• Goal Test
• Be in a state where the tiles are all in the positions shown on the right of Figure 1.
• Path Cost
• The number of moves taken; the optimal solution minimizes the total number of moves.
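A minimal sketch of an 8-puzzle state and its legal blank moves (Python; the goal layout is assumed to be 1-8 in order with the blank last, since Figure 1 is not reproduced here).

```python
# 8-puzzle state: a tuple of 9 entries read row by row, 0 marks the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout

MOVES = {"up": -3, "down": +3, "left": -1, "right": +1}   # blank index offsets

def blank_moves(state):
    """Yield (action, new_state) for every legal move of the blank."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for action, delta in MOVES.items():
        # Not every action is possible in every state (board edges).
        if (action == "up" and row == 0) or (action == "down" and row == 2):
            continue
        if (action == "left" and col == 0) or (action == "right" and col == 2):
            continue
        s = list(state)
        j = i + delta
        s[i], s[j] = s[j], s[i]          # swap the blank with the neighbouring tile
        yield action, tuple(s)

def goal_test(state):
    return state == GOAL
```

This also answers the question above: moves that would push the blank off the board are not applicable, so not every action is available in every state.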
Week 2: Problem Solving by Search -
Example
• The Missionaries and Cannibals Problem
• Problem Statement: 3 missionaries and 3 cannibals must cross a river using a
boat that holds up to 2 people. If cannibals ever outnumber missionaries on
either shore, the missionaries are eaten.
Component          Description
Initial State      (Left: 3M, 3C, Boat; Right: 0M, 0C)
Actions            Boat crossings with 1-2 people (e.g., "Move 1M + 1C to the right").
Transition Model   Update the counts on both shores after each valid move.
Goal Test          (Left: 0M, 0C; Right: 3M, 3C, Boat)
Path Cost          Each crossing costs 1 (minimize the total number of trips).
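A sketch of the state representation and the safety check for this problem (Python; encoding a state as the counts on the left bank plus the boat's side is one common choice, not the only one).

```python
# State: (missionaries_left, cannibals_left, boat_on_left).
# The right bank is implied: 3 - m_left missionaries and 3 - c_left cannibals.
INITIAL = (3, 3, True)
GOAL = (0, 0, False)

def is_safe(m_left, c_left):
    """Cannibals never outnumber missionaries on either shore (unless no missionaries there)."""
    m_right, c_right = 3 - m_left, 3 - c_left
    return (m_left == 0 or m_left >= c_left) and (m_right == 0 or m_right >= c_right)

def successors(state):
    """Yield every state reachable by one boat crossing of 1-2 people (cost 1 each)."""
    m, c, boat = state
    sign = -1 if boat else +1            # the boat carries people away from its bank
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
        m2, c2 = m + sign * dm, c + sign * dc
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and is_safe(m2, c2):
            yield (m2, c2, not boat)
```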
Week 2: Problem Solving by
Search
• Search Process
• Exploration: The agent generates a search tree (all possible action sequences).
• Evaluation: Selects paths likely to lead to the goal (using strategies like BFS, DFS, A*).
• Termination: Stops when a goal state is reached or all possibilities are exhausted.
Week 2: Evaluating Search Strategies
• Completeness: Will the search always find a solution if a
solution exists?
• Time Complexity: How long does it take to find a solution?
Usually measured in terms of the number of nodes expanded
• Space Complexity: How much space is used by the
algorithm? Usually measured in terms of the maximum
number of nodes that can be stored
• Optimality: Will the search always find the least cost
solution? (when actions have cost)
Week 2: Search
Strategies/Algorithms
• Uninformed Search (aka Blind Search)
• Adopt a fixed rule for selecting the next state/node to be
expanded
• Uses no information about the problem domain to guide the
search
• Uninformed Search Algorithms
• Breadth First Search (BFS)
• Depth First Search (DFS)
Week 2: Breadth First Search
Algorithm
• Algorithm outline
• Always choose the node from the list (OPEN) that is at the smallest
depth (closest to the starting point) and expand it. Add any newly
created nodes to the list (OPEN).
• The list (OPEN) works like a queue, following a "first in, first out"
(FIFO) order.
• Terminate when the chosen node for expansion is the goal.
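A minimal sketch of this outline in Python, run on an explicit graph; the graph and goal below are placeholders that match the small tree traced on the following slides.

```python
from collections import deque

def bfs(graph, start, is_goal):
    """Breadth-first search over an adjacency dict; returns a path or None."""
    frontier = deque([[start]])              # OPEN as a FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()            # always take the shallowest node
        node = path[-1]
        if is_goal(node):                    # goal test on the node chosen for expansion
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])   # new successors go to the end
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(bfs(graph, "A", lambda n: n == "G"))   # ['A', 'C', 'G']
```

Here the goal test is applied when a node is selected for expansion, as in the outline above; the trace on the next slides applies it when a node is inserted, which saves expanding one extra level but is otherwise the same algorithm.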
Week 2: Breadth-first search
• Expand shallowest unexpanded node
• List/Open (or fringe): nodes in queue to be explored
• List is a first-in-first-out (FIFO) queue, i.e., new successors go at end of
the queue.
• Goal-Test when inserted.
Figure legend: Future = green dotted circles; Frontier = white nodes; Expanded/active = gray nodes; Forgotten/reclaimed = black nodes.
Initial state = A
Is A a goal state?
Put A at end of queue.
frontier = [A]
Week 2: Breadth-first search
• Expand shallowest unexpanded node
• List/Frontier is a FIFO queue, i.e., new successors go at end
Expand A to B, C.
Is B or C a goal state?
Put B, C at end of queue.
frontier = [B,C]
Week 2: Breadth-first search
• Expand shallowest unexpanded node
• Frontier is a FIFO queue, i.e., new successors go at end
Expand B to D, E
Is D or E a goal state?
Put D, E at end of queue
frontier=[C,D,E]
Week 2: Breadth-first search
• Expand shallowest unexpanded node
• Frontier is a FIFO queue, i.e., new successors go at end
Expand C to F, G.
Is F or G a goal state?
Put F, G at end of queue.
frontier = [D,E,F,G]
Week 2: Breadth-first search
• Expand shallowest unexpanded node
• Frontier is a FIFO queue, i.e., new successors go at end
Expand D to no children.
Forget D.
frontier = [E,F,G]
Week 2: Breadth-first search
• Expand shallowest unexpanded node
• Frontier is a FIFO queue, i.e., new successors go at end
Expand E to no children.
Forget B,E.
frontier = [F,G]
Week 2: Breadth First Search
Algorithm
• Properties of BFS
• Complete
• Will always find a solution if one exists
• Optimal
• If all operations have the same cost; otherwise not optimal, but it finds a solution with the shortest path length
• Exponential time and space complexity
• O(b^d) nodes will be generated, where d is the depth of the solution and b is the branching factor (number of children at each node)
Week 2: Depth First Search
Algorithm
• Algorithm outline
• Always select from the OPEN the node with the greatest depth for expansion
and put all newly generated nodes in OPEN
• OPEN is organized as LIFO list, i.e. a stack
• Terminate if a node selected for expansion is a goal
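A matching sketch of this outline in Python; the optional depth bound anticipates the termination issue discussed under the properties of DFS below. The graph is a placeholder matching the tree traced on the next slides.

```python
def dfs(graph, start, is_goal, depth_limit=None):
    """Depth-first search over an adjacency dict; returns a path or None."""
    frontier = [[start]]                     # OPEN as a LIFO stack of paths
    while frontier:
        path = frontier.pop()                # always take the deepest node
        node = path[-1]
        if is_goal(node):
            return path
        if depth_limit is not None and len(path) - 1 >= depth_limit:
            continue                         # cut off the search below the fixed depth
        for child in reversed(graph.get(node, [])):
            if child not in path:            # avoid looping along the current path
                frontier.append(path + [child])   # new successors go on top of the stack
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["H", "I"], "E": ["J", "K"]}
print(dfs(graph, "A", lambda n: n == "G"))   # ['A', 'C', 'G']
```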
Week 2: Depth-first search
• Expand deepest unexpanded node
• Frontier = Last In First Out (LIFO) queue, i.e., new successors go at the front of the
queue.
• Goal-Test when inserted.
Figure legend: Future = green dotted circles; Frontier = white nodes; Expanded/active = gray nodes; Forgotten/reclaimed = black nodes.
Initial state = A
Is A a goal state?
Put A at front of queue.
frontier = [A]
Week 2: Depth-first search
• Expand deepest unexpanded node
• Frontier = LIFO queue, i.e., put successors at front
Expand A to B, C.
Is B or C a goal state?
Put B, C at front of queue.
frontier = [B,C]
Week 2: Depth-first search
• Expand deepest unexpanded node
• Frontier = LIFO queue, i.e., put successors at front
Expand B to D, E.
Is D or E a goal state?
Put D, E at front of queue.
frontier = [D,E,C]
Week 2: Depth-first search
• Expand deepest unexpanded node
• Frontier = LIFO queue, i.e., put successors at front
Expand D to H, I.
Is H or I a goal state?
Put H, I at front of queue.
frontier = [H,I,E,C]
Week 2: Depth-first search
• Expand deepest unexpanded node
• Frontier = LIFO queue, i.e., put successors at front
Expand H to no children.
Forget H.
frontier = [I,E,C]
Week 2: Depth-first search
• Expand deepest unexpanded node
• Frontier = LIFO queue, i.e., put successors at front
Expand I to no children.
Forget D, I.
frontier = [E,C]
Week 2: Depth-first search
• Expand deepest unexpanded node
• Frontier = LIFO queue, i.e., put successors at front
Expand E to J, K.
Is J or K a goal state?
Put J, K at front of queue.
frontier = [J,K,C]
Week 2: Depth-first search
• Expand deepest unexpanded node
• Frontier = LIFO queue, i.e., put successors at front
Expand J to no children.
Forget J.
frontier = [K,C]
Week 2: Depth-first search
• Expand deepest unexpanded node
• Frontier = LIFO queue, i.e., put successors at front
Expand K to no children.
Forget B, E, K.
frontier = [C]
Week 2: Depth-first search
• Expand deepest unexpanded node
• Frontier = LIFO queue, i.e., put successors at front
Expand C to F, G.
Is F or G a goal state?
Put F, G at front of queue.
frontier = [F,G]
Week 2: Depth-first search
• Properties of DFS
• Not Complete
• Not guaranteed to find a solution even if one exists
• Not Optimal
• May not terminate without a depth bound (i.e., cutting off the search below a fixed depth)
• Exponential time complexity
• O(b^d) nodes will be generated, where d is the maximum depth of the search and b is the branching factor (number of children at each node)
• Linear Space Complexity
• O(b·d) nodes will be stored in memory
Examples
• Uninformed Search Examples
Week 2: Informed Search Algorithms
• Unlike uninformed search, informed search algorithms:
• have additional, problem-specific information to guide the decision on which path to take
• use additional knowledge about the problem domain to select the most promising path
• Heuristics are used, which are informed guesses
• A heuristic is a function that estimates the cost of reaching a goal from a given state (node)
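As a concrete example, one simple heuristic for the 8-puzzle is the number of misplaced tiles (a sketch; the goal layout 1-8 with the blank last is assumed).

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout; 0 is the blank

def h_misplaced(state):
    """Estimate: count the tiles (excluding the blank) not in their goal position."""
    return sum(1 for tile, goal_tile in zip(state, GOAL)
               if tile != 0 and tile != goal_tile)

print(h_misplaced(GOAL))                          # 0 -- required for a goal state
print(h_misplaced((1, 2, 3, 4, 5, 6, 7, 0, 8)))   # 1 -- only tile 8 is out of place
```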
Week 2: Best First Search Algorithm
• Best First Search
• General form of Informed Search Algorithms
• Node is selected for expansion based on evaluation function f(n)
• Sort nodes in the OPEN list by increasing values of an evaluation function,
f(n), that incorporates domain-specific information
• Expand the lowest cost node first
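A sketch of this general scheme in Python, with the OPEN list kept as a priority queue ordered by f; the evaluation function is passed in so that the greedy and A* variants on the following slides are just different choices of f. The toy graph and heuristic values in the usage example are invented for illustration.

```python
import heapq
from itertools import count

def best_first_search(graph, start, is_goal, f):
    """Generic best-first search: always expand the path with the lowest f-value."""
    tie = count()                                  # tie-breaker for equal f-values
    frontier = [(f([start]), next(tie), [start])]  # OPEN as a priority queue of paths
    while frontier:
        _, _, path = heapq.heappop(frontier)       # lowest f-value first
        node = path[-1]
        if is_goal(node):
            return path
        for child in graph.get(node, []):
            if child not in path:                  # avoid cycles along the current path
                new_path = path + [child]
                heapq.heappush(frontier, (f(new_path), next(tie), new_path))
    return None

# Greedy best-first on a toy graph: f(path) = h(last node); values are invented.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": [], "G": []}
h = {"A": 3, "B": 2, "C": 1, "D": 2, "G": 0}
print(best_first_search(graph, "A", lambda n: n == "G",
                        f=lambda path: h[path[-1]]))   # ['A', 'C', 'G']
```

Greedy best-first search corresponds to f(path) = h(last node of the path); A* corresponds to f(path) = g(path) + h(last node).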
Week 2: Best First Search Algorithm
• Often, for best-first search algorithms, f(n) is defined in terms of a heuristic function h(n)
• h(n) = estimated cost of the cheapest path from the state at node n to a goal
state (for goal state h(n) = 0)
• Heuristics are the common way in which additional knowledge is passed to
the search algorithm
• Define a heuristic function, h(n)
• If h(n1) < h(n2), we guess it is cheaper to reach the goal from n1, than n2.
• We require h(n) = 0, for a goal state
Week 2: Greedy Best-First Search
• Greedy Best-First Search
• Use an evaluation function, f(n) = h(n), to rank nodes in the
OPEN by increasing values of f
• Selects the node to expand that is believed to be closest (i.e.,
smallest f value) to a goal node
• Greedy best first search greedily tries to achieve a low cost
solution
• Greedy best-first search ignores the cost of getting to n, so it may be led astray, exploring nodes that cost a lot to reach but seem to be close to the goal.
Week 2: Greedy Best-First Search
• Properties of GBFS
• Complete: NO
• Optimal: NO
• Time and space complexity: O(b^d) (i.e., exponential)
Week 2: A* Search Algorithm
• A* Search Algorithm
• The best-known form of the best-first search algorithm
• Takes into account the cost of getting to a node as well as the estimated cost
of getting to the goal from a given node
• Define an evaluation function f(n)
• f(n) = g(n) + h(n)
• g(n) = actual cost to get to node n from start
• h(n) = estimated cost to get to a goal from node n
• Always expand the node with the lowest f-value on OPEN
• The f-value f(n) is an estimate of the cost of getting to the goal via node n, i.e. of the cheapest solution path through n.
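A small worked sketch of A* on an invented weighted graph; the edge costs and heuristic values below are made up purely for illustration (the heuristic happens to be admissible).

```python
import heapq
from itertools import count

# Invented weighted state space: node -> list of (child, step cost).
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 4, "B": 1, "G": 0}   # heuristic estimates; h(goal) = 0

def a_star(graph, h, start, goal):
    """A* search: always expand the node with the lowest f = g + h on OPEN."""
    tie = count()
    frontier = [(h[start], next(tie), 0, [start])]   # entries are (f, tie, g, path)
    best_g = {start: 0}
    while frontier:
        f, _, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g
        for child, cost in graph[node]:
            g2 = g + cost                            # actual cost from the start
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h[child], next(tie), g2, path + [child]))
    return None

print(a_star(graph, h, "S", "G"))   # (['S', 'B', 'G'], 5)
```

The route through A looks equally promising at first (f = 1 + 4 = 5), but A* ends up returning the cheaper route through B, with total cost 5 rather than 6, because it keeps track of the actual cost g(n) as well as the estimate h(n).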
Week 2: A* Search Algorithm
• Properties of A* Search
• Complete: Yes
• Optimal: Yes
• Time and space complexity: O(b^d) (i.e., exponential)
Week 2: A* Search Algorithm
• Admissible heuristics
• A heuristic h(n) is admissible if for every node n,
• h(n) <= h*(n)
• where h*(n) is the true cost to reach the goal state from node n.
• An admissible heuristic never overestimates the cost to reach the goal
• Informed Search Examples
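For instance, the Manhattan (city-block) distance of every tile from its goal square is an admissible 8-puzzle heuristic, since each tile needs at least that many moves to reach its place (a sketch, again assuming the goal layout 1-8 with the blank last).

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)                     # assumed goal layout
GOAL_POS = {tile: divmod(i, 3) for i, tile in enumerate(GOAL)}

def h_manhattan(state):
    """Admissible estimate: sum of city-block distances of tiles from their goal squares."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                                   # the blank does not count
        r, c = divmod(i, 3)
        gr, gc = GOAL_POS[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

print(h_manhattan(GOAL))                               # 0
print(h_manhattan((1, 2, 3, 4, 5, 6, 0, 7, 8)))        # 2 <= true cost (also 2 here)
```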
Week 2: Review Questions
Week 2: Mini project
• Problem-Solving Techniques
• Implement BFS and DFS to solve a maze problem (code + explanation).
• Compare their performance in terms of time and space complexity.