Module 1: Problem Solving and Search in Artificial Intelligence (Detailed Notes)
1. Introduction to AI
1.1 Overview of Artificial Intelligence (AI):
AI is a branch of computer science aimed at building machines that can perform tasks that
typically require human intelligence. These tasks include problem-solving, learning,
perception, language understanding, and decision-making. AI is classified into:
- Narrow AI: Specialized in one task (e.g., Siri, Google Translate).
- General AI: Mimics human intelligence across various domains.
- Super AI: Hypothetical AI that surpasses human intelligence.
Applications of AI include:
- Healthcare (diagnosis systems)
- Finance (fraud detection)
- Transportation (autonomous vehicles)
- Education (adaptive learning systems)
1.2 Turing Test:
Proposed by Alan Turing in 1950, the Turing Test assesses a machine's ability to exhibit
human-like intelligence. In this setup:
- A human judge interacts with both a human and a machine via a terminal.
- If the judge cannot reliably distinguish the machine from the human, the machine is said to
have passed the test.
Although foundational, the Turing Test has limitations as it measures imitation, not actual
intelligence or consciousness.
1.3 Intelligent Agents:
An agent is any entity that perceives its environment via sensors and acts upon that
environment through actuators.
An intelligent agent is one that selects the actions expected to maximize its performance
measure, given its percepts and goals.
Types of Intelligent Agents:
1. Simple Reflex Agents: Respond to current percepts only.
2. Model-based Reflex Agents: Maintain internal state for better decision-making.
3. Goal-based Agents: Choose actions to achieve specific goals.
4. Utility-based Agents: Optimize performance using utility functions.
5. Learning Agents: Improve performance over time via learning modules.
2. Problem Solving by Searching
AI often solves problems by modeling them as search problems. A problem can be defined
as a tuple:
- Initial state: Starting point.
- Actions: Possible moves.
- Transition model: Describes the result of actions.
- Goal test: Checks if the goal is achieved.
- Path cost: A numerical cost associated with the solution path.
Example: In the 8-puzzle, the goal is to move tiles on a 3x3 grid to reach the goal
configuration.
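The five components above can be made concrete with a minimal 8-puzzle sketch (a small illustration, not a full solver; the state is a 9-tuple read row by row, with 0 marking the blank, and all function names are illustrative):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the blank tile

def actions(state):
    """Legal blank-tile moves, as index offsets on the 3x3 grid."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if row > 0: moves.append(-3)  # slide blank up
    if row < 2: moves.append(3)   # slide blank down
    if col > 0: moves.append(-1)  # slide blank left
    if col < 2: moves.append(1)   # slide blank right
    return moves

def result(state, move):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    j = i + move
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL
```

The initial state is whatever scrambled tuple the puzzle starts in, and the path cost is simply the number of moves.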
3. Uninformed (Blind) Search Strategies
These search methods do not use any domain-specific knowledge. They explore the search
space blindly.
a) Breadth-First Search (BFS):
- Explores all nodes at the current depth before going deeper.
- Uses a queue (FIFO).
- Complete; optimal when all step costs are equal (e.g., one unit per move).
- Time/Space Complexity: O(b^d), where b is branching factor, d is depth.
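A minimal BFS sketch (illustrative names; `neighbors` is assumed to be a function returning a node's successors):

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search; returns a shortest path (by edge count) or None."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        node = frontier.popleft()        # FIFO queue: shallowest node first
        if node == goal:
            path = []
            while node is not None:      # reconstruct path via parent links
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in parent:        # avoid revisiting explored nodes
                parent[nxt] = node
                frontier.append(nxt)
    return None
```

For example, on the graph A→{B, C}, B→D, C→D, `bfs('A', 'D', ...)` finds the two-edge path through B.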
b) Depth-First Search (DFS):
- Explores as far as possible down each branch.
- Uses a stack (LIFO) or recursion.
- Not guaranteed to find the shortest path.
- May loop forever on cyclic or infinite state spaces unless visited states are tracked.
- Time: O(b^m), Space: O(bm), where m is the maximum depth (O(m) for backtracking variants).
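A minimal iterative DFS sketch with a depth cutoff (illustrative; cycle avoidance here only checks the current path, which keeps memory low at the cost of possible re-expansion):

```python
def dfs(start, goal, neighbors, limit=50):
    """Depth-first search with a depth cutoff to avoid infinite descent."""
    stack = [(start, [start])]           # LIFO stack of (node, path-so-far)
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        if len(path) <= limit:
            for nxt in neighbors(node):
                if nxt not in path:      # avoid cycles along the current path
                    stack.append((nxt, path + [nxt]))
    return None
```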
c) Depth-Limited and Iterative Deepening DFS (DFID):
- DFS with a depth limit to prevent infinite descent.
- DFID (also called IDDFS): repeated depth-limited DFS with an increasing depth limit.
- Combines BFS's completeness (and optimality for unit step costs) with DFS's low memory use.
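A sketch of both ideas together (illustrative names; `dls` is a recursive depth-limited search, and `iddfs` re-runs it with a growing limit):

```python
def dls(node, goal, neighbors, limit, path=None):
    """Recursive depth-limited search; returns a path or None."""
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return None                      # cutoff reached
    for nxt in neighbors(node):
        if nxt not in path:              # avoid cycles along the current path
            found = dls(nxt, goal, neighbors, limit - 1, path + [nxt])
            if found:
                return found
    return None

def iddfs(start, goal, neighbors, max_depth=20):
    """Iterative deepening: repeated depth-limited DFS with increasing limit."""
    for limit in range(max_depth + 1):
        found = dls(start, goal, neighbors, limit)
        if found:
            return found
    return None
```

Because shallower limits are tried first, the first path found has minimal depth, mirroring BFS.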
4. Informed (Heuristic) Search Strategies
These use additional knowledge (heuristics) to guide search.
a) Generate and Test:
- Generates possible solutions and tests each.
- Simple but can be inefficient.
b) Best-First Search:
- Expands the node with the best (lowest) heuristic value h(n).
- The greedy variant ignores path cost, so it is fast but not optimal.
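A minimal greedy best-first sketch (illustrative; a priority queue ordered purely by the heuristic h(n)):

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Always expand the frontier node with the lowest heuristic estimate h(n)."""
    frontier = [(h(start), start, [start])]      # min-heap ordered by h
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None
```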
c) Beam Search:
- Keeps only a fixed number of best candidates at each level.
- Reduces memory use but can miss optimal solutions.
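A small beam-search sketch (illustrative; at each level all candidates are expanded, then pruned back to the `width` best by heuristic, which is exactly why optimal solutions can be discarded):

```python
def beam_search(start, goal, neighbors, h, width=2, max_steps=20):
    """Beam search: keep only the `width` best candidates at each level."""
    frontier = [start]
    for _ in range(max_steps):
        if goal in frontier:
            return True
        # Expand every candidate, then prune to the best `width` by h.
        candidates = {n for s in frontier for n in neighbors(s)}
        if not candidates:
            return False
        frontier = sorted(candidates, key=h)[:width]
    return False
```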
d) Hill Climbing:
- Selects the best neighboring state.
- Fast but can get stuck in local maxima, plateaus, or ridges.
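A minimal steepest-ascent hill-climbing sketch (illustrative; it stops as soon as no neighbour improves, which is precisely where local maxima and plateaus trap it):

```python
def hill_climb(state, score, neighbors, max_steps=1000):
    """Move to the best neighbour until no neighbour improves the score."""
    for _ in range(max_steps):
        best = max(neighbors(state), key=score, default=None)
        if best is None or score(best) <= score(state):
            return state                 # local maximum (or plateau edge)
        state = best
    return state
```

For instance, maximizing f(x) = -(x - 3)^2 over the integers from x = 0 climbs straight to 3.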
e) A* Search:
- f(n) = g(n) + h(n), where:
- g(n): Cost from start to node n.
- h(n): Estimated cost from n to goal.
- Complete and optimal if h(n) is admissible (never overestimates the true cost); for graph search, h(n) should also be consistent.
- Widely used in maps, robotics, etc.
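A compact A* sketch (illustrative names; `neighbors` is assumed to yield (successor, step_cost) pairs, and with h(n) = 0 this reduces to uniform-cost search):

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search ordering the frontier by f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):   # found a cheaper route
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')
```

On the weighted graph A→B (1), A→C (4), B→D (5), C→D (1), the cheapest path A-C-D (cost 5) is returned even though B looks closer at first.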
5. Problem Reduction Search
The problem is broken into smaller subproblems using AND/OR trees.
a) AND/OR Graphs:
- OR nodes: Solve any one of the branches.
- AND nodes: Must solve all child branches.
- Useful in complex problem decomposition.
b) AO* Algorithm:
- Like A*, but for AND/OR graphs.
- Uses heuristic cost to guide search.
- Updates parent nodes when child nodes change.
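The bottom-up cost rule that AO* relies on can be sketched as follows (a minimal illustration on an acyclic AND/OR tree with unit edge costs, not the full AO* revision loop; all names are illustrative):

```python
def andor_cost(node, children, kind, h):
    """Cost of solving `node` in an AND/OR tree.
    `children[n]` lists sub-nodes, `kind[n]` is 'AND' or 'OR',
    and `h[n]` gives the cost of a leaf."""
    kids = children.get(node, [])
    if not kids:
        return h[node]                   # leaf: heuristic/terminal cost
    costs = [1 + andor_cost(c, children, kind, h) for c in kids]
    # OR node: solve the cheapest alternative; AND node: solve every subproblem.
    return min(costs) if kind[node] == 'OR' else sum(costs)
```

AO* proper applies this rule repeatedly, propagating revised costs up to parents whenever a child's estimate changes.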
6. Constraint Satisfaction Problems (CSPs)
CSPs consist of:
- Variables: X1, X2, ..., Xn
- Domains: D1, D2, ..., Dn (values for each variable)
- Constraints: Restrictions (e.g., X1 ≠ X2)
Examples:
- Map coloring: No two adjacent regions have the same color.
- Sudoku: Numbers must be unique in rows, columns, and boxes.
Solving Techniques:
- Backtracking: Try values, backtrack on failure.
- Forward checking: Eliminate inconsistent values ahead.
- Constraint propagation: Apply constraints early to reduce choices.
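Backtracking on the map-coloring example can be sketched as follows (a minimal illustration; `adjacent` encodes the inequality constraints, and variable/value ordering heuristics are omitted):

```python
def backtrack(assignment, variables, domains, adjacent):
    """Backtracking search: adjacent regions must get different colours."""
    if len(assignment) == len(variables):
        return assignment                # every variable assigned: solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Constraint check: no conflict with already-coloured neighbours.
        if all(assignment.get(n) != value for n in adjacent[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, adjacent)
            if result:
                return result
            del assignment[var]          # undo and try the next value
    return None
```

Three mutually adjacent regions, for example, force three distinct colours.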
7. Means-Ends Analysis
A goal-directed problem-solving technique. Compares current state with the goal and
applies operators that reduce the difference.
Used in automated planning and reasoning systems like STRIPS.
Example: If a robot's goal is to pick an item, it evaluates how far it is from the item and
moves accordingly.
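The difference-reduction loop can be sketched in a toy one-dimensional setting (illustrative only; real means-ends analysis, as in STRIPS-style planners, matches symbolic operator preconditions and effects rather than a numeric distance):

```python
def means_ends(current, goal, operators):
    """Repeatedly apply the operator that most reduces the current-goal gap.
    `operators` maps names to functions state -> state."""
    plan = []
    while current != goal:
        # Pick the operator whose result is closest to the goal.
        name, nxt = min(((n, op(current)) for n, op in operators.items()),
                        key=lambda pair: abs(goal - pair[1]))
        if abs(goal - nxt) >= abs(goal - current):
            return None                  # no operator reduces the difference
        plan.append(name)
        current = nxt
    return plan
```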
8. Stochastic Search Methods
Use randomness to explore solution space.
a) Simulated Annealing:
- Based on the annealing process in metallurgy.
- Occasionally accepts worse solutions to escape local optima.
- Acceptance probability decreases over time (cooling schedule).
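A minimal simulated-annealing sketch (illustrative; `neighbor` is assumed to propose a random nearby state, and a simple geometric cooling schedule stands in for more careful ones):

```python
import math
import random

def simulated_annealing(state, energy, neighbor, t0=10.0, cooling=0.95, steps=500):
    """Minimise `energy`: accept worse moves with a probability that shrinks as T cools."""
    t = t0
    best = state
    for _ in range(steps):
        nxt = neighbor(state)
        delta = energy(nxt) - energy(state)
        # Always accept improvements; accept worse moves with prob e^(-delta/T).
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = nxt
            if energy(state) < energy(best):
                best = state
        t *= cooling                     # geometric cooling schedule
    return best
```

Early on (large T) almost any move is accepted, letting the search escape local optima; late on it behaves like hill climbing.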
b) Particle Swarm Optimization (PSO):
- Population-based optimization inspired by birds/fish.
- Each particle adjusts position based on its own best and global best.
- Used in function optimization problems.
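A one-dimensional PSO sketch (a minimal illustration with standard inertia/cognitive/social coefficients; parameter values here are conventional defaults, not tuned):

```python
import random

def pso(f, n_particles=10, steps=100, w=0.5, c1=1.5, c2=1.5, lo=-10.0, hi=10.0):
    """Minimise f on [lo, hi] with a basic particle swarm."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                       # each particle's personal best position
    gbest = min(pos, key=f)              # best position found by the swarm
    for _ in range(steps):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # Velocity blends inertia, pull toward personal best, pull toward global best.
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i]
    return gbest
```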
9. Game Playing in AI
Games like Chess and Tic-Tac-Toe can be modeled as search problems.
a) Minimax Algorithm:
- Assumes two players: MAX (tries to maximize score) and MIN (tries to minimize).
- Constructs game tree to compute best move.
b) Alpha-Beta Pruning:
- Improves Minimax by cutting off branches that won’t affect the final result.
- Same result as Minimax with less computation.
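Both ideas fit in one short sketch (illustrative names; `moves` yields successor states and `value` scores a terminal or depth-limited position):

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, value):
    """Minimax with alpha-beta pruning; returns the backed-up value of `state`."""
    succ = moves(state)
    if depth == 0 or not succ:
        return value(state)              # terminal or cutoff: static evaluation
    if maximizing:
        best = -math.inf
        for s in succ:
            best = max(best, alphabeta(s, depth - 1, alpha, beta, False, moves, value))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                    # MIN will never allow this branch
        return best
    best = math.inf
    for s in succ:
        best = min(best, alphabeta(s, depth - 1, alpha, beta, True, moves, value))
        beta = min(beta, best)
        if alpha >= beta:
            break                        # MAX has a better option elsewhere
    return best
```

On the tree R→{A, B} with MIN leaves A→{3, 5} and B→{2, 9}, MAX at the root gets min(3, 5) = 3 versus min(2, 9) = 2 and chooses 3; pruning skips leaves that cannot change this outcome.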