TABLE OF CONTENTS
1. Introduction
2. Understanding Problem Solving in AI
3. Problem Formulation
4. Types of Problems in AI
5. Search Strategies Overview
6. Why Are Search Strategies Important?
7. Uninformed Search Strategies
8. Informed (Heuristic) Search Strategies
9. Local Search and Optimization Problems
10. Adversarial Search and Game Playing
11. Constraint Satisfaction Problems
12. Real-World Applications
13. Challenges and Limitations
14. Conclusion
15. References
INTRODUCTION:
Artificial Intelligence (AI) aims to build intelligent agents that can perform
tasks which typically require human intelligence. One of the core areas in AI is
problem solving: finding a sequence of actions that leads from an initial state
to a desired goal state. This process often requires exploring various
possibilities, evaluating potential outcomes, and selecting the most effective
path forward. To do this, AI systems employ search strategies: systematic
methods for navigating the space of possible solutions.
Understanding and implementing effective problem-solving and search
strategies are crucial for developing intelligent agents capable of operating in
dynamic and uncertain environments. This document explores the foundational
concepts, methodologies, and applications of problem-solving and search
strategies in AI.
UNDERSTANDING PROBLEM SOLVING IN AI:
Problem solving involves translating real-world problems into structured forms
that a machine can understand and solve. This process usually includes
identifying an initial state, a goal state, and a set of rules or actions that can
change the state. The AI system then explores possible paths from the initial
state to the goal state using various search strategies.
For instance, consider the task of navigating a robot from one location to
another in a maze. The initial state is the robot's starting position, the goal state
is the target location, and the actions are the possible movements (e.g., move
forward, turn left/right). The AI system must determine a sequence of actions
that leads from the initial state to the goal state while avoiding obstacles.
EFFECTIVE PROBLEM-SOLVING REQUIRES THE AI SYSTEM TO:
Represent the problem accurately
Explore possible solutions
Evaluate alternatives
Select the optimal solution
PROBLEM FORMULATION
Before any AI system can solve a problem, it must first understand what the
problem is. That’s where problem formulation comes in.
Think of problem formulation as defining the rules of the game: you need to
know where the game begins, what moves are allowed, what the goal is, and
how to keep track of progress.
In AI, problem formulation is the process of translating a real-world scenario
into a format that an AI system can analyze and solve. This means clearly
defining the environment, objectives, and constraints in a way that allows for
systematic search and decision-making.
There are five essential components that must be clearly defined when
formulating a problem:
1. Initial State
This is where the problem begins—the current situation.
In a GPS navigation system, this could be your current location.
In a puzzle game, this is the current arrangement of the puzzle pieces.
2. Actions (Possible Operators)
These are all the things the agent (AI system) can do from a given state.
For example, in a chess game, this includes all legal moves the player can
make.
These are the steps or decisions available at each point.
3. Transition Model
This defines the result of taking a specific action in a specific state.
In other words, if the AI takes an action, where does it end up?
It’s like a rulebook that tells the system what each action does.
4. Goal Test
How do you know the problem is solved?
This function checks whether a given state satisfies the goal conditions.
For example, has the AI reached the target location? Has the puzzle been
solved?
5. Path Cost
This defines the cost of getting from one state to another.
The cost could be time, money, distance, energy, or any other measurable
factor.
For example, the shortest path in a map may be based on the least
distance or traffic time.
Example:
Let’s say you’re building an AI to navigate a maze.
Initial State: The entrance of the maze.
Actions: Move up, down, left, right (if not blocked by walls).
Transition Model: If you move right from point A, you end up at point
B.
Goal Test: Are you at the maze's exit?
Path Cost: Each step costs 1 unit, or it could vary depending on
obstacles.
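As a sketch, the five components above can be written directly in code. The
small grid below is an illustrative assumption (0 = open cell, 1 = wall), not
part of the maze described in the text:

```python
# Minimal sketch of the five components of problem formulation,
# using a small hard-coded maze (0 = open cell, 1 = wall).
MAZE = [
    [0, 0, 1],
    [1, 0, 0],
    [0, 0, 0],
]
INITIAL_STATE = (0, 0)   # entrance, as (row, col)
GOAL_STATE = (2, 2)      # exit

def actions(state):
    """Possible operators: moves that stay inside the maze and off walls."""
    r, c = state
    moves = {"up": (r - 1, c), "down": (r + 1, c),
             "left": (r, c - 1), "right": (r, c + 1)}
    return {name: pos for name, pos in moves.items()
            if 0 <= pos[0] < len(MAZE) and 0 <= pos[1] < len(MAZE[0])
            and MAZE[pos[0]][pos[1]] == 0}

def transition(state, action):
    """Transition model: the state reached by taking `action` in `state`."""
    return actions(state)[action]

def goal_test(state):
    """Goal test: are we at the exit?"""
    return state == GOAL_STATE

def path_cost(path):
    """Path cost: each step costs 1 unit."""
    return len(path) - 1
```

With these five pieces defined, any general search algorithm can be applied to
the maze without knowing anything maze-specific.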
Why Problem Formulation is Important
1. Efficiency: A well-formulated problem helps the AI avoid wasting time
on irrelevant paths.
2. Correctness: It ensures the AI actually solves the right problem.
3. Scalability: Helps in designing AI that can handle complex and large-
scale problems.
4. Reusability: A clearly defined problem can be solved using general AI
search algorithms.
Types of Problem Formulations
Different problems require different formulations:
Deterministic vs. Nondeterministic: Are outcomes predictable?
Fully Observable vs. Partially Observable: Does the agent have all the
info?
Single-Agent vs. Multi-Agent: Is the AI alone or competing/cooperating
with others?
Static vs. Dynamic Environment: Can the environment change during
solving?
Challenges in Problem Formulation
Ambiguity in Real Life: Real-world scenarios aren’t always clearly
defined.
Over-simplification: Simplifying too much may make the solution
unrealistic.
Computational Complexity: Poor formulation can lead to inefficient or
unsolvable models.
Changing Environments: The AI must sometimes reformulate the
problem as new information appears.
TYPES OF PROBLEMS IN AI
Problem-solving in AI isn't one-size-fits-all. The nature of the problem greatly
affects how an AI system tackles it. By understanding the types of problems, we
can choose the most suitable solving strategy. Here's an expanded explanation
of the key classifications:
1. Single-State vs. Multiple-State:
Single-State Problems: Deterministic and predictable. Example:
Solving a jigsaw puzzle.
Multiple-State Problems: Non-deterministic with uncertain outcomes.
Example: Navigating with obstacles or wind.
2. Fully Observable vs. Partially Observable:
Fully Observable Problems: Complete visibility of the environment.
Example: Chess.
Partially Observable Problems: Limited visibility. Example: Robot in a
maze with limited sensors.
3. Static vs. Dynamic:
Static Problems: Environment does not change during solving.
Example: Crossword puzzle.
Dynamic Problems: Environment changes during solving. Example:
Self-driving cars.
4. Discrete vs. Continuous:
Discrete Problems: Limited, countable states and actions. Example:
Tic-tac-toe.
Continuous Problems: Infinite states and actions. Example: Flying a
drone.
SEARCH STRATEGIES OVERVIEW
When it comes to solving problems in Artificial Intelligence, the concept of
“search” is at the heart of the process. Think of it like navigating a maze: you’re
trying to get from the entrance to the exit by exploring various paths. Similarly,
AI systems try to go from an initial problem state to a goal state by exploring
different possible actions.
To do this effectively, AI uses search strategies, which are methods or rules for
exploring possible paths to a solution. These strategies decide which paths to
explore, in what order, and how to evaluate the options available at each
step. The ultimate goal is to find the most efficient path from the problem to
the solution.
Search strategies are categorized into two main types:
Uninformed (Blind) Search
Informed (Heuristic) Search
UNINFORMED (BLIND) SEARCH STRATEGIES:
These are basic strategies that have no extra information about the problem
domain. They do not "know" where the goal is; they simply explore the search
space in a systematic way.
Imagine you’re in a dark room searching for your keys. You don’t know where
they are, so you start looking around randomly or in a specific order.
These strategies search the state space blindly:
1. Breadth-First Search (BFS): Explores all nodes at one depth before going
deeper.
2. Depth-First Search (DFS): Explores as deep as possible before
backtracking.
3. Uniform Cost Search: Explores the least-cost path first.
4. Depth-Limited Search: DFS with a limit on depth.
5. Iterative Deepening DFS: Combines DFS and BFS advantages.
Characteristics of Uninformed Search Strategies:
No domain-specific knowledge.
Systematic, but can be inefficient.
Guarantees finding a solution (if one exists), but not necessarily the best
one.
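As a sketch, breadth-first search can be written in a few lines. The small graph
below is an illustrative assumption standing in for a real state space:

```python
from collections import deque

# Breadth-first search: explores all nodes at one depth before going
# deeper. Returns a path with the fewest steps from start to goal,
# or None if no path exists.
def bfs(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:                 # goal test
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in explored:
                explored.add(neighbor)
                frontier.append(path + [neighbor])
    return None                          # search space exhausted

# Illustrative state space as an adjacency list.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Swapping the FIFO queue for a stack (LIFO) turns this into depth-first search,
which is why the two are usually presented together.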
INFORMED (HEURISTIC) SEARCH STRATEGIES:
These strategies use additional information (called heuristics) to make
smarter decisions about which paths to follow. A heuristic is like a clue or guess
that tells the AI, "This path looks more promising."
Back to our key-search analogy: now imagine you remember that you usually
leave your keys near the couch. That’s a heuristic! So, you check the couch first
instead of blindly searching every corner.
These methods use heuristics to guide the search:
1. Best-First Search: Selects the most promising node using a heuristic.
2. A* Search: Combines path cost and heuristic (f(n) = g(n) + h(n)).
3. Greedy Search: Selects the node that appears closest to the goal.
4. Memory-Bounded A* (e.g., IDA*): Trades extra computation time for
reduced memory use.
Characteristics of Informed Search Strategies:
Use problem-specific knowledge to improve efficiency.
Typically faster and more effective than uninformed strategies.
May not always guarantee the optimal solution unless carefully designed
(e.g., A* with an admissible heuristic).
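A minimal A* sketch illustrating f(n) = g(n) + h(n); the weighted graph and
heuristic table below are illustrative assumptions (the heuristic is chosen to
never overestimate the true cost):

```python
import heapq

# A* search: expand the node with the lowest f = g + h, where g is the
# path cost so far and h a heuristic estimate of the cost to the goal.
def a_star(graph, h, start, goal):
    # priority queue ordered by f; entries are (f, g, node, path)
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g + h[neighbor], new_g,
                                          neighbor, path + [neighbor]))
    return None, float("inf")

# Illustrative weighted graph and admissible heuristic values.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 2)]}
h = {"S": 4, "A": 3, "B": 2, "G": 0}
print(a_star(graph, h, "S", "G"))  # (['S', 'A', 'B', 'G'], 5)
```

Setting h to zero everywhere reduces this to Uniform Cost Search, which shows
how the heuristic is exactly the "extra information" informed search adds.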
Why Are Search Strategies Important?
Search strategies determine:
How fast a solution can be found.
Whether the solution is optimal (i.e., best possible).
How much memory or processing power is used.
LOCAL SEARCH AND OPTIMIZATION PROBLEMS
What is Local Search?
Local search is a problem-solving method in Artificial Intelligence that focuses
on finding optimal solutions by exploring the neighborhood of the current
solution, rather than searching through the entire state space.
Unlike traditional search methods (like BFS or DFS), which maintain a full path
from the initial state to the goal, local search keeps only a single current state
and moves to a neighboring one. It's especially useful when:
The path to the goal is not important, only the solution matters (e.g.,
optimization problems).
The search space is huge or infinite.
Memory is limited.
How It Works (Core Idea)
Imagine you’re climbing a mountain (looking for the highest point), but you can
only see your immediate surroundings. You check if any nearby spot is higher,
and if so, you move there. You repeat this until no neighbor is better than where
you are. That’s local search.
Types of Local Search Algorithms
Some of the common local search techniques used in AI:
1. Hill Climbing
Basic Idea: Move to the neighbor with the highest value (closer to the
goal).
Advantage: Simple and uses little memory.
Disadvantage: Can get stuck in:
Local Maxima: A peak that is not the highest overall.
Plateaus: Flat areas with no clear direction.
Ridges: Sloped areas that require diagonal moves.
Variants:
Stochastic Hill Climbing: Picks a random uphill move.
First-Choice Hill Climbing: Evaluates neighbors randomly and picks
the first better one.
Random-Restart Hill Climbing: Tries multiple times from random
initial states.
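A minimal hill-climbing sketch, including the random-restart variant; the
one-dimensional objective with a single peak is an illustrative assumption
(real problems have richer state spaces and many local maxima):

```python
import random

# Hill climbing on a one-dimensional landscape: repeatedly move to the
# better neighbor; stop when no neighbor improves (a local maximum).
def hill_climb(value, start, step=1):
    current = start
    while True:
        best = max([current - step, current + step], key=value)
        if value(best) <= value(current):
            return current            # no neighbor is better: stop here
        current = best

def random_restart(value, restarts=5, low=-100, high=100):
    # Random-restart variant: keep the best result over several
    # independent runs from random initial states.
    return max((hill_climb(value, random.randint(low, high))
                for _ in range(restarts)), key=value)

# Illustrative objective with a single peak at x = 3.
f = lambda x: -(x - 3) ** 2
print(hill_climb(f, start=10))   # 3
print(random_restart(f))         # 3
```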
2. Simulated Annealing
Inspired by metallurgy (slowly cooling metal to achieve a strong
structure).
How it works: Occasionally allows "bad" moves (to lower values) to
escape local maxima.
Uses a temperature value that decreases over time—higher temperature
= more randomness.
Advantage: Can find global optimum if cooled slowly enough.
Disadvantage: Slower and needs tuning (cooling schedule, etc.).
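A simulated-annealing sketch over integer states; the landscape, starting
point, and cooling parameters are illustrative assumptions chosen so the search
starts near a local maximum:

```python
import math
import random

# Simulated annealing: like hill climbing, but a worse move may be
# accepted with probability exp(delta / T); temperature T decreases
# over time, so randomness fades as the search proceeds.
def simulated_annealing(value, start, t0=10.0, cooling=0.95, steps=500):
    current = best = start
    t = t0
    for _ in range(steps):
        neighbor = current + random.choice([-1, 1])
        delta = value(neighbor) - value(current)
        # accept improvements always; worse moves with probability e^(delta/T)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = neighbor
        if value(current) > value(best):
            best = current            # remember the best state seen
        t *= cooling                  # geometric cooling schedule
    return best

# Illustrative landscape: local maximum at x = 11, global maximum at x = 0.
f = lambda x: -abs(x) + (6 if abs(x - 12) <= 1 else 0)
random.seed(4)
result = simulated_annealing(f, start=12)
print(result, f(result))
```

Pure hill climbing from x = 12 would stall at the local maximum; the early
high-temperature phase is what lets annealing walk downhill past it.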
3. Genetic Algorithms (GAs)
Inspired by natural selection and genetics.
Maintains a population of candidate solutions.
Applies:
Selection: Choose the fittest individuals.
Crossover: Combine parts of two solutions.
Mutation: Randomly alter parts of a solution.
Used for: Complex problems like scheduling, design, or feature selection.
Advantage: Explores multiple areas of the search space in parallel.
Disadvantage: Requires tuning and can be computationally expensive.
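The selection, crossover, and mutation steps can be sketched on a toy task.
The "count the 1-bits" objective (OneMax), the population size, and the
mutation rate below are all illustrative assumptions:

```python
import random

# A minimal genetic algorithm maximizing the number of 1-bits in a
# bitstring (the OneMax toy problem).
def genetic_algorithm(length=20, pop_size=30, generations=60, p_mut=0.02):
    random.seed(0)                      # reproducible illustrative run
    fitness = sum                       # fitness = number of 1-bits
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Selection: binary tournament, fitter individual wins.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = random.randrange(1, length)
            child = p1[:cut] + p2[cut:]          # Crossover: one-point
            child = [bit ^ 1 if random.random() < p_mut else bit
                     for bit in child]           # Mutation: random bit flips
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best), "ones out of 20")
```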
4. Beam Search (and Stochastic Beam Search)
Keeps track of k best states (the "beam").
Expands only the successors of those states.
Stochastic version randomly selects based on probability, giving chance
to weaker states.
5. Local Beam Search
Starts with multiple states and expands all neighbors.
Picks the best k of these to continue.
Keeps the search from getting stuck in one area.
Advantages
Very memory efficient (especially Hill Climbing).
Works well for large and complex problems.
Simple to implement and often very fast.
Disadvantages
No guarantee of optimal solution.
Can stall at local optima.
ADVERSARIAL SEARCH AND GAME PLAYING
Adversarial search is used in competitive environments, where multiple agents
(players) are competing against each other. Unlike typical search problems
(where the AI only considers its own moves toward a goal), adversarial search
must anticipate the actions of an opponent who is trying to prevent the AI from
succeeding.
Think of games like Chess, Checkers, or Tic-Tac-Toe—each move you make is
influenced by what your opponent might do next.
WHY IT IS DIFFERENT FROM OTHER SEARCH:
Feature           | Traditional Search          | Adversarial Search
------------------|-----------------------------|--------------------------------------
Number of Agents  | Usually one                 | Two or more (opponents)
Goal              | Reach a specific state      | Win the game or maximize score
Predictability    | Environment is often static | Opponent introduces unpredictability
Strategy Required | Not always                  | Always (consider opponent’s strategy)
Game Types in AI
1. Deterministic vs. Stochastic
Deterministic: No randomness involved (e.g., Chess).
Stochastic: Random elements (e.g., Backgammon with dice).
2. Perfect vs. Imperfect Information
Perfect Information: All players know the full game state (e.g.,
Checkers).
Imperfect Information: Some information is hidden (e.g., Poker).
3. Zero-Sum Games
One player’s gain is another’s loss. Most classic board games fall
into this category.
Key Algorithms in Adversarial Search
1. Minimax Algorithm
Minimax is the fundamental algorithm for decision-making in two-player,
zero-sum games.
MAX Player: Tries to get the maximum score.
MIN Player: Tries to minimize the score for MAX.
How It Works:
Traverse the game tree to the end (or until a set depth).
At terminal states, apply a utility function (win = +1, lose = -1, draw = 0).
Backtrack:
MAX chooses the move with the maximum value.
MIN chooses the move with the minimum value.
Limitation: Very slow for deep trees (like Chess) due to exponential growth of
possibilities.
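The backed-up values can be sketched on an explicit game tree. The tree below
is an illustrative assumption: three MIN nodes under a MAX root, with leaf
utilities:

```python
# Minimax on an explicit game tree: MAX picks the maximum backed-up
# value, MIN the minimum. Integer leaves hold utility values.
def minimax(node, is_max):
    if isinstance(node, int):          # terminal state: return its utility
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Illustrative tree: MAX to move at the root, MIN at the next level.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, is_max=True))  # 3
```

MIN backs up 3, 2, and 2 from the three subtrees, and MAX then picks the
largest of those, 3.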
2. Alpha-Beta Pruning
Alpha-beta pruning is an enhancement to Minimax that reduces the number of
nodes evaluated.
Alpha: Best value MAX can guarantee so far.
Beta: Best value MIN can guarantee so far.
Key Idea: If a move is worse than a previously examined move, stop
evaluating that branch (prune it).
Benefits:
Same result as Minimax.
Much faster—can explore twice as deep in the same amount of time.
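A sketch of the pruning rule on the same kind of explicit tree used for
Minimax (the tree is an illustrative assumption):

```python
# Alpha-beta pruning: same result as Minimax, but branches that cannot
# influence the final decision are skipped.
# alpha = best value MAX can guarantee so far; beta = best for MIN.
def alphabeta(node, is_max, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, int):          # terminal state: return its utility
        return node
    if is_max:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:          # MIN will never allow this branch
                break                  # prune remaining children
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:              # MAX will never choose this branch
            break                      # prune remaining children
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, is_max=True))  # 3
```

The answer matches plain Minimax; only the number of leaves examined changes.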
3. Expectiminimax (for Stochastic Games)
Used when the game includes chance nodes (e.g., rolling a die).
Like Minimax, but adds chance nodes with expected value calculations.
Used in games like Backgammon or Monopoly.
Applications in Games
Game            | AI Strategy Used
----------------|---------------------------------------------
Tic-Tac-Toe     | Minimax
Chess           | Alpha-Beta, Deep Learning (e.g., AlphaZero)
Checkers        | Minimax + Heuristics
Go              | Monte Carlo Tree Search (MCTS), Deep RL
Poker           | Game theory, Probabilistic Models
Real-time Games | State abstraction + Search Trees
CONSTRAINT SATISFACTION PROBLEM (CSP)
What is a Constraint Satisfaction Problem?
A Constraint Satisfaction Problem (CSP) is a mathematical problem defined by:
A set of variables.
A domain of values for each variable.
A set of constraints that restrict the combinations of values the variables
can take.
The objective is to assign values to variables such that all constraints are
satisfied.
Formal Definition
A CSP is defined by:
X = {X₁, X₂, ..., Xₙ} → a set of variables.
D = {D₁, D₂, ..., Dₙ} → a domain of possible values for each variable.
C = {C₁, C₂, ..., Cₖ} → a set of constraints that limit allowable
combinations.
The solution is a complete assignment of values to variables that satisfies all
constraints.
EXAMPLES OF CSPs
Problem         | Variables                           | Domains                    | Constraints
----------------|-------------------------------------|----------------------------|-------------------------------------------------
Sudoku          | Each cell in the 9×9 grid           | {1–9}                      | Rows, columns, and 3×3 boxes have unique digits
Map Coloring    | Regions (e.g., countries)           | {Red, Green, Blue, Yellow} | Neighboring regions must have different colors
N-Queens        | Positions of queens on a chessboard | {1–N} for each row         | No two queens can attack each other
Scheduling      | Tasks or meetings                   | Time slots or rooms        | No overlapping, resource constraints
Cryptarithmetic | Letters (A, B, C, etc.)             | {0–9}                      | Unique digits, satisfy the sum equation
Solving Techniques
1. Backtracking Search
Depth-first search where variables are assigned one at a time.
If an assignment leads to a conflict, backtrack to try a different value.
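A backtracking sketch on the map-coloring CSP; the four-region map below is an
illustrative assumption (loosely modeled on adjacent territories), not taken
from the table above:

```python
# Backtracking search for a CSP: assign one variable at a time; if an
# assignment violates a constraint, undo it and try the next value.
def backtrack(assignment, variables, domains, neighbors):
    if len(assignment) == len(variables):
        return assignment                       # all variables assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # constraint: neighboring regions must not share a color
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]                 # undo and try next value
    return None                                 # dead end: backtrack

# Illustrative map: four regions, three of them mutually adjacent.
variables = ["WA", "NT", "SA", "Q"]
domains = {v: ["Red", "Green", "Blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
solution = backtrack({}, variables, domains, neighbors)
print(solution)
```

The same skeleton solves any CSP once `neighbors` is generalized to arbitrary
constraints; forward checking and MRV slot in where the value and variable are
chosen.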
2. Forward Checking
Each time a variable is assigned, eliminate inconsistent values from
domains of connected variables.
Prevents dead-ends early.
3. Constraint Propagation (Arc Consistency)
Uses algorithms like AC-3 to propagate constraints and reduce domains.
Keeps the problem as “tight” as possible by enforcing local consistency.
4. Heuristics for Efficiency
Minimum Remaining Values (MRV): Choose variable with the fewest
legal values.
Degree Heuristic: Choose variable involved in the most constraints.
Least Constraining Value: Choose the value that leaves the most
options open.
5. Local Search for CSPs
Example: Min-Conflicts Heuristic
Start with a complete but inconsistent assignment and iteratively fix
conflicts.
Works well for large problems (e.g., 1,000-queen problem).
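A min-conflicts sketch for N-Queens (one queen per column, the variable for
each column being its row); the board size and step limit are illustrative
assumptions:

```python
import random

# Min-conflicts heuristic for N-Queens: start with a complete random
# assignment, then repeatedly move a conflicted queen to the row that
# minimizes its conflicts.
def conflicts(rows, col, row):
    """Number of queens attacking a queen placed at (row, col)."""
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n, max_steps=10000):
    random.seed(0)                      # reproducible illustrative run
    rows = [random.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:
            return rows                 # solution: no queen attacks another
        col = random.choice(conflicted)
        # move the queen in `col` to a least-conflicted row (random ties)
        rows[col] = min(range(n),
                        key=lambda r: (conflicts(rows, col, r),
                                       random.random()))
    return None                         # step budget exhausted

solution = min_conflicts(8)
print(solution)
```

Note that the initial assignment is complete but inconsistent, exactly as the
text describes; search repairs it rather than building it up from scratch.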
REAL-WORLD APPLICATIONS
Navigation Systems: Route planners such as Google Maps use search
algorithms to find the shortest or fastest routes.
Robotics: Pathfinding for robot movement.
Games: AI opponents in games like Chess.
Scheduling: Exams, airlines, manufacturing.
CHALLENGES AND LIMITATIONS
Scalability: Large problems can be intractable.
Incomplete Information: Partial observability complicates solving.
Dynamic Environments: Require adaptive algorithms.
Heuristic Quality: Bad heuristics lead to poor performance.
REFERENCES
Russell, S., & Norvig, P. (2016). Artificial Intelligence: A Modern Approach
(3rd ed.).
Poole, D., & Mackworth, A. (2010). Artificial Intelligence: Foundations of
Computational Agents.
Online AI courses and tutorials (Coursera, edX, etc.)