Artificial Intelligence Unit2

Notes for Unit 2 of the SPPU Artificial Intelligence syllabus

Unit II: Problem-solving

1. Solving Problems by Searching

Definition: Solving problems by searching is a fundamental paradigm in Artificial
Intelligence where a problem is represented as a state space, and the solution is found by
systematically exploring this space. The goal is to find a path from an initial state to a
goal state. This process involves defining the problem, constructing the search space,
and then employing an algorithm to navigate through it. Search is a powerful and
versatile method used in a wide range of AI applications, from game playing to route
planning.

Detailed Explanation:

●​ State Space: The set of all possible configurations or states that an agent can be in.
For a chessboard, each possible arrangement of pieces is a state.
●​ Initial State: The starting point of the search.
●​ Goal State: The desired configuration or a set of conditions that define a solution.
There can be multiple goal states.
●​ Actions/Operators: The set of moves or transitions that an agent can make to
move from one state to another.
●​ Path: A sequence of actions leading from the initial state to a goal state. A solution
is a path from the initial state to a goal state.
●​ Search Algorithm: The method used to explore the state space. It determines
which path to explore next.

Mathematical / Technical Aspect:

●​ State Space Graph: A problem can be modeled as a directed graph where nodes
represent states and edges represent actions. The search problem is to find a path
from the start node to a goal node.
●​ Cost Function: Each action can have an associated cost, and the goal is often to
find a path with the minimum total cost. The path cost is the sum of the costs of
the actions in the path: Cost(path) = Σ (i = 1 to k) Cost(action_i)
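The components above can be made concrete with a short Python sketch (the class layout, graph format, and city distances are illustrative assumptions, loosely modeled on the Romania route-planning example used later in these notes):

```python
# A minimal search-problem definition: states, initial state,
# actions, transitions, goal test, and step costs.
class RouteProblem:
    def __init__(self, graph, start, goal):
        self.graph = graph        # {state: {neighbor: cost, ...}, ...}
        self.initial = start
        self.goal = goal

    def actions(self, state):
        return list(self.graph[state])   # moving to a neighbor is an action

    def result(self, state, action):
        return action                    # here the action names the next state

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        return self.graph[state][action]

# Path cost is the sum of the step costs along the path.
def path_cost(problem, path):
    return sum(problem.step_cost(s, a) for s, a in zip(path, path[1:]))

romania = {"Arad": {"Sibiu": 140, "Timisoara": 118},
           "Sibiu": {"Arad": 140, "Fagaras": 99},
           "Fagaras": {"Sibiu": 99, "Bucharest": 211},
           "Timisoara": {"Arad": 118},
           "Bucharest": {"Fagaras": 211}}
problem = RouteProblem(romania, "Arad", "Bucharest")
print(path_cost(problem, ["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 140+99+211 = 450
```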

Real-World Examples:
1.​ Route Finding: Finding the shortest route from one city to another, where cities
are states and roads are actions.
2.​ Puzzle Games: Solving a Rubik's Cube or the 8-puzzle, where each configuration of
the puzzle is a state and a move is an action.
3.​ Game Playing: Finding a winning sequence of moves in chess, where each board
position is a state and a move is an action.

Applications:

●​ Robotics: Path planning for robots to navigate through an environment.
●​ Logistics and Supply Chain: Optimizing delivery routes to minimize time and fuel
costs.
●​ Bioinformatics: Searching for a specific DNA sequence in a large genome.

Important Notes / Key Takeaways:

●​ The efficiency of a search algorithm depends on the size of the state space.
●​ A well-defined problem is crucial for an effective search.
●​ The search process can be visualized as exploring a tree or graph.

2. Problem-Solving Agents

Definition: A problem-solving agent is a type of goal-based agent that uses a search
algorithm to find a sequence of actions that leads from the current state to a goal state.
These agents operate in a structured manner by first formulating a clear problem and
then executing the solution. They are particularly well-suited for environments that are
static, deterministic, and discrete, where the agent has enough time to plan a complete
sequence of actions before beginning.

Detailed Explanation:

●​ Goal Formulation: The first step for a problem-solving agent is to define the goal it
wants to achieve. For example, a robot's goal might be to move a block from one
table to another.
●​ Problem Formulation: Once a goal is set, the agent formulates a specific problem
by defining the state space, initial state, actions, and goal test.
●​ Search: The agent uses a search algorithm to find a solution, which is a path from
the initial state to the goal state.
●​ Execution: The agent executes the solution path by performing the sequence of
actions found by the search algorithm.
●​ Monitoring: After execution, a problem-solving agent may monitor the
environment to ensure the plan is still valid and react to any unexpected changes.
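The formulate-search-execute cycle described above can be sketched as a small skeleton (all names here are illustrative; a real agent would also update its state from percepts and monitor execution):

```python
def problem_solving_agent(state, formulate_problem, search):
    """Formulate a problem from the current state, search offline for a
    complete plan, then yield the plan's actions one at a time (execution)."""
    problem = formulate_problem(state)   # goal + problem formulation
    plan = search(problem)               # search phase: find a solution path
    if plan is None:
        return                           # no solution found: do nothing
    for action in plan:                  # execution phase: replay the plan
        yield action

# Toy usage: the "search" returns a fixed plan just for illustration.
plan = list(problem_solving_agent("at-A", lambda s: ("at-A", "at-C"),
                                  lambda p: ["go-B", "go-C"]))
print(plan)  # ['go-B', 'go-C']
```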

Real-World Examples:

1.​ Chess-Playing Program: The agent formulates the goal of winning the game,
searches for a sequence of moves to checkmate the opponent, and then executes
the best move it finds.
2.​ Automated Warehouse Robot: The agent's goal is to retrieve an item. It formulates
a problem by representing the warehouse as a grid, searches for the shortest path
to the item's location, and then moves along that path.
3.​ Airline Trip Planner: The agent's goal is to find a flight plan. It formulates a
problem by defining airports as states and flights as actions, searches for a
sequence of flights to the destination, and presents the solution to the user.

Applications:

●​ Planning and Scheduling: Creating schedules for tasks or projects, such as factory
production lines or class schedules.
●​ Route Planning: Applications like Google Maps use problem-solving agents to find
routes.
●​ Game AI: The AI in many video games uses a problem-solving approach to navigate
and perform tasks.

Important Notes / Key Takeaways:

●​ Problem-solving agents assume a perfect world where their actions have
predictable outcomes.
●​ They are most effective in structured environments where they can plan a
complete solution.
●​ The success of a problem-solving agent depends on the quality of its problem
formulation and the efficiency of its search algorithm.
3. Example Problems

Definition: Example problems are simplified, canonical problems used in AI to illustrate
and test different search algorithms. These problems are designed to be easy to
understand and model, allowing researchers and students to focus on the performance
and characteristics of the algorithms themselves. They often involve a clear state space, a
well-defined set of actions, and one or more goal states.

Detailed Explanation:

●​ 8-Puzzle:
○​ Original Phrase: A classic sliding-tile puzzle on a 3x3 grid.
○​ Explanation: The puzzle consists of a 3x3 frame containing 8 numbered tiles
and one empty space. The goal is to slide the tiles to arrange them in a
specific order.
○​ Extra Note: The state space for the 8-puzzle is relatively small (362,880
possible states), making it a good problem for demonstrating basic search
algorithms.
●​ Missionaries and Cannibals:
○​ Original Phrase: A river-crossing puzzle involving missionaries, cannibals,
and a boat.
○​ Explanation: Three missionaries and three cannibals are on one side of a
river. They need to cross the river using a boat that can hold at most two
people. The constraint is that if cannibals outnumber missionaries on either
side, the missionaries will be eaten. The goal is to get everyone to the other
side safely.
○​ Extra Note: This problem is a good example of a constraint satisfaction
problem and is used to test logical reasoning in search algorithms.
●​ Romania Route Planning:
○​ Original Phrase: Finding the shortest route between two cities in Romania.
○​ Explanation: A map of Romania with several cities connected by roads. The
goal is to find a path from a starting city (e.g., Arad) to a destination city (e.g.,
Bucharest). The problem can be represented as a graph where cities are
nodes and roads are edges with distances as costs.
○​ Extra Note: This problem is famously used to demonstrate the difference
between uninformed and informed search algorithms.
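The first of these, the 8-puzzle, can be modeled in a few lines (the tuple encoding with 0 for the blank is one common convention, not prescribed by the notes):

```python
# 8-puzzle state as a tuple of 9 numbers read row by row, 0 = blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def moves(state):
    """Yield successor states by sliding a neighboring tile into the blank."""
    i = state.index(0)                 # position of the blank
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:  # stay on the board
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]    # swap the blank with that tile
            yield tuple(s)

start = (1, 2, 3, 4, 5, 6, 7, 0, 8)    # one move away from the goal
print(GOAL in set(moves(start)))       # True: the goal is a direct successor
```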
Real-World Examples:

1.​ 8-Puzzle: Can be seen as a simplified model for industrial logistics, such as
arranging containers in a limited space.
2.​ Missionaries and Cannibals: A simple analogy for real-world problems involving
resource management and safety constraints, such as scheduling tasks with
dependencies.
3.​ Romania Route Planning: A direct parallel to real-world navigation systems like
Google Maps, which use similar graph search techniques.

Applications:

●​ Teaching and Research: Example problems are widely used in AI courses and
research papers to illustrate concepts.
●​ Benchmarking: They serve as benchmarks for comparing the performance and
efficiency of different search algorithms.
●​ Developing New Algorithms: Researchers use these problems to test the
effectiveness of new search strategies.

Important Notes / Key Takeaways:

●​ Example problems simplify complex real-world situations to focus on the core
search problem.
●​ They provide a common ground for comparing different AI approaches.
●​ The choice of a good example problem can significantly impact the clarity of a
search algorithm's explanation.

4. Search Algorithms

Definition: A search algorithm is a procedure for exploring a state space to find a path
from an initial state to a goal state. These algorithms are the core of problem-solving
agents and can be categorized into two main groups: uninformed search strategies and
informed (heuristic) search strategies. Search algorithms differ in their completeness
(guaranteeing to find a solution if one exists), optimality (guaranteeing to find the best
solution), time complexity, and space complexity.

Detailed Explanation:
●​ Completeness: A search algorithm is complete if it is guaranteed to find a solution
if one exists.
●​ Optimality: A search algorithm is optimal if it is guaranteed to find the cheapest
solution (the one with the minimum path cost).
●​ Time Complexity: The number of nodes generated by the algorithm. It is a measure
of how long the algorithm takes to run.
●​ Space Complexity: The maximum number of nodes stored in memory at any one
time. It is a measure of how much memory the algorithm requires.
●​ Tree Search vs. Graph Search: Tree search algorithms can explore the same state
multiple times, while graph search algorithms keep track of visited states to avoid
redundant work.

Mathematical / Technical Aspect:

●​ Big O Notation: Used to describe the time and space complexity of search
algorithms. For example, a complexity of O(b^d) means the number of nodes grows
exponentially with the depth of the solution (d), with the branching factor (b) as the base.
●​ Node Representation: Each node in the search tree/graph can be represented as a
data structure containing information such as its state, its parent node, the action
that led to it, and its path cost.
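The node data structure just described might look like this in Python (field names follow the common textbook convention; `solution` is an illustrative helper):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """A node in the search tree: wraps a state plus search bookkeeping."""
    state: Any
    parent: Optional["Node"] = None   # node this one was expanded from
    action: Any = None                # action that produced this state
    path_cost: float = 0.0            # g(n): cost from the root to here

def solution(node):
    """Walk parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

root = Node("Arad")
child = Node("Sibiu", parent=root, action="Arad->Sibiu", path_cost=140)
print(solution(child))  # ['Arad->Sibiu']
```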

Real-World Examples:

1.​ Breadth-First Search (Uninformed): A simple search algorithm that explores all
nodes at the current depth level before moving on to the next level. It's like finding
a path from one point to another by exploring all roads one block away, then all
roads two blocks away, and so on.
2.​ A* Search (Informed): A more advanced algorithm that uses a heuristic to guide
its search towards the goal, like a person using a map to estimate the distance to
their destination.
3.​ Genetic Algorithms (Local Search): A type of search algorithm inspired by
evolution, used to find approximate solutions to optimization problems, such as
finding the best design for an airplane wing.

Applications:

●​ Game AI: Used to find winning moves or paths for characters.


●​ Network Routing: Finding the best path for data packets to travel across a
network.
●​ Computational Biology: Searching for optimal protein folding configurations.

Important Notes / Key Takeaways:

●​ The choice of search algorithm is a trade-off between efficiency and finding the
optimal solution.
●​ The effectiveness of an algorithm depends on the characteristics of the problem.
●​ Informed search algorithms generally outperform uninformed ones, especially for
large state spaces.

5. Uninformed Search Strategies

Definition: Uninformed search strategies, also known as blind search, are a class of
search algorithms that do not use any domain-specific knowledge or "hints" to guide
their search. They explore the state space in a systematic but brute-force manner,
relying only on the information available in the problem definition itself. These strategies
are often simple to implement but can be very inefficient for large state spaces.

Detailed Explanation:

●​ Breadth-First Search (BFS):


○​ Original Phrase: Explores the shallowest nodes first.
○​ Explanation: BFS expands all the nodes at the current depth level before
moving on to the next level. It uses a queue to manage the nodes to be
expanded.
○​ Extra Note: BFS is complete and optimal if all action costs are equal, but its
space and time complexity can be very high.
●​ Uniform-Cost Search:
○​ Original Phrase: Expands the node with the lowest path cost.
○​ Explanation: This algorithm is a generalization of BFS where it considers the
cost of each path, not just the number of steps. It uses a priority queue to
always expand the cheapest path first.
○​ Extra Note: Uniform-Cost Search is complete and optimal, provided every step cost is positive.
●​ Depth-First Search (DFS):
○​ Original Phrase: Explores the deepest nodes first.
○​ Explanation: DFS explores as far as possible down a single path before
backtracking. It uses a stack to manage the nodes to be expanded.
○​ Extra Note: DFS is not complete (it can get stuck in an infinite loop) and is
not optimal, but it requires much less memory than BFS.
●​ Depth-Limited Search:
○​ Original Phrase: DFS with a predefined depth limit.
○​ Explanation: This algorithm is a variation of DFS that avoids infinite loops by
not exploring paths beyond a certain depth.
○​ Extra Note: It is not complete if the solution is deeper than the limit and is
not optimal.
●​ Iterative Deepening Depth-First Search (IDDFS):
○​ Original Phrase: Combines the benefits of BFS and DFS.
○​ Explanation: IDDFS performs a series of depth-limited searches, with the
depth limit increasing by one in each iteration. It finds the solution in the
first shallowest search that reaches it.
○​ Extra Note: IDDFS is complete, optimal when all step costs are equal, and it has
the low memory usage of DFS.

Real-World Examples:

1.​ Breadth-First Search: Can be used to find the shortest path in an unweighted
graph, such as finding the shortest connection between two people on a social
media network.
2.​ Uniform-Cost Search: A delivery company might use this to find the cheapest
route to a destination, considering fuel costs and tolls.
3.​ Iterative Deepening Search: Can be used in game-playing AI for games with a large
state space but a relatively shallow solution depth.

Applications:

●​ Maze Solving: Simple maze-solving algorithms often use uninformed search
strategies.
●​ Web Crawling: A simple web crawler might use BFS to explore all pages linked from
a starting page up to a certain depth.
●​ Network Analysis: Finding the shortest path in a network with equal link costs.

Important Notes / Key Takeaways:


●​ Uninformed search strategies are a good starting point but are often too slow for
complex real-world problems.
●​ The choice of a specific uninformed search algorithm depends on the constraints
of the problem, particularly the size of the state space and memory limitations.
●​ IDDFS is generally the preferred uninformed search algorithm due to its
completeness, optimality, and memory efficiency.

6. Informed (Heuristic) Search Strategies

Definition: Informed search strategies, also known as heuristic search, are a class of
algorithms that use domain-specific knowledge to guide their search. This knowledge,
called a heuristic function, provides an estimate of the cost to reach the goal from any
given state. By using this information, informed search algorithms can explore the state
space much more efficiently than uninformed search, often finding a solution in a
fraction of the time.

Detailed Explanation:

●​ Best-First Search:
○​ Original Phrase: Explores the node that appears to be "best" according to a
heuristic.
○​ Explanation: This is a general search strategy that always expands the node
with the best evaluation function. It's a greedy approach that doesn't
consider the path cost from the start.
○​ Extra Note: Best-First Search is not optimal and can get stuck in local
minima, where the heuristic leads it down a suboptimal path.
●​ Greedy Best-First Search:
○​ Original Phrase: Uses a heuristic function to estimate the cost to the goal.
○​ Explanation: This is a specific type of best-first search where the evaluation
function is just the heuristic function, h(n). It expands the node that is closest
to the goal, according to the heuristic.
○​ Extra Note: It is often not optimal, but it can be very fast.
●​ A* Search:
○​ Original Phrase: Combines uniform-cost search with greedy best-first
search.
○​ Explanation: A* search is a complete and optimal algorithm that finds the
cheapest path from the start to the goal. It uses an evaluation function f(n)
that is the sum of the path cost from the start, g(n), and the heuristic
estimate to the goal, h(n). The formula is: f(n)=g(n)+h(n).
○​ Extra Note: The performance of A* is highly dependent on the quality of the
heuristic function. A good heuristic leads to a much faster search.
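The f(n) = g(n) + h(n) rule can be sketched with a priority queue (this follows the standard algorithm; the weighted-graph format and the straight-line-distance values for the Romania example are assumptions based on the textbook map):

```python
import heapq

def astar(graph, h, start, goal):
    """A* search: always expand the frontier node with the lowest f = g + h.
    graph[s] is a dict {neighbor: step_cost}; h maps a state to its estimate."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in graph[state].items():
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):   # found a cheaper route
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

romania = {"Arad": {"Sibiu": 140, "Timisoara": 118},
           "Sibiu": {"Fagaras": 99, "Rimnicu": 80},
           "Fagaras": {"Bucharest": 211},
           "Rimnicu": {"Pitesti": 97},
           "Pitesti": {"Bucharest": 101},
           "Timisoara": {}, "Bucharest": {}}
# Straight-line distances to Bucharest (an admissible heuristic).
sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu": 193,
       "Pitesti": 100, "Timisoara": 329, "Bucharest": 0}
path, cost = astar(romania, sld.__getitem__, "Arad", "Bucharest")
print(path, cost)  # ['Arad', 'Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest'] 418
```

Note how the heuristic steers the search away from the shorter-looking Fagaras route (total 450) toward the genuinely cheaper Pitesti route (total 418).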

Mathematical / Technical Aspect:

●​ Evaluation Function: The function used by an informed search algorithm to
determine which node to expand next. For A*, the function is f(n) = g(n) + h(n).
●​ Admissibility: A heuristic function is admissible if it never overestimates the cost
to reach the goal. For A* to be optimal, the heuristic must be admissible.
●​ Consistency: A heuristic is consistent if it obeys the triangle inequality: for every
node n and successor n′, h(n) ≤ Cost(n, n′) + h(n′). A consistent heuristic is also admissible.

Real-World Examples:

1.​ A* Search: Used in GPS navigation systems to find the fastest route, where the
heuristic is the straight-line distance to the destination.
2.​ Greedy Best-First Search: An AI in a video game might use this to find a quick path
to a character's destination, ignoring obstacles for the sake of speed.
3.​ Robotic Path Planning: A robot in a factory might use an A* search algorithm to
navigate around obstacles, with a heuristic that estimates the distance to the
target location.

Applications:

●​ Video Games: AI for non-player characters (NPCs) uses informed search to find
paths.
●​ Logistics and Shipping: Finding optimal routes for delivery trucks.
●​ Image Recognition: Using heuristics to guide the search for patterns in an image.

Important Notes / Key Takeaways:

●​ Informed search strategies are much more efficient for large state spaces.
●​ The quality of the heuristic function is the single most important factor for the
performance of an informed search algorithm.
●​ A* search is widely considered the most effective general-purpose informed search
algorithm.

7. Heuristic Functions

Definition: A heuristic function, often denoted as h(n), is a function that estimates the
cost of the cheapest path from a node n to a goal state. It is the key component of an
informed search algorithm, providing the "guess" or "rule of thumb" that guides the
search towards a solution. The quality of a heuristic function—how accurate its estimates
are—directly impacts the efficiency and, in some cases, the optimality of the search
algorithm.

Detailed Explanation:

●​ Admissible Heuristic:
○​ Original Phrase: Never overestimates the cost to the goal.
○​ Explanation: An admissible heuristic is one where the estimated cost h(n) is
always less than or equal to the true cost to the goal. For example, the
straight-line distance is an admissible heuristic for route planning.
○​ Extra Note: Using an admissible heuristic with A* search guarantees finding
an optimal solution.
●​ Consistent Heuristic:
○​ Original Phrase: Obeys the triangle inequality.
○​ Explanation: A heuristic is consistent if for every node n and every successor
node n′ of n, the estimated cost h(n) is less than or equal to the step cost
from n to n′ plus the estimated cost from n′ to the goal
(h(n) ≤ Cost(n, n′) + h(n′)).
○​ Extra Note: Consistency is a stronger condition than admissibility. Any
consistent heuristic is also admissible.
●​ Dominance:
○​ Original Phrase: One heuristic is better than another if it's more accurate.
○​ Explanation: Heuristic h_2 dominates h_1 if for every node n,
h_2(n) ≥ h_1(n). A dominating heuristic provides a more accurate estimate
and will lead to a more efficient search.
○​ Extra Note: When choosing a heuristic, a dominating one is always preferred.
Real-World Examples:

1.​ Route Planning (e.g., Google Maps): The straight-line distance between two cities
is a common and effective heuristic. It's admissible because you can't get there in a
shorter distance than the straight line.
2.​ 8-Puzzle: Two common heuristics are:
○​ Number of Misplaced Tiles: Counting how many tiles are not in their final
position.
○​ Manhattan Distance: Summing the grid-distance of each tile from its goal
position. The Manhattan distance is a better heuristic as it is more accurate.
3.​ Robotic Arm: The straight-line distance between the robot's current gripper
position and the target object's position is an admissible heuristic for planning the
arm's movement.
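The two 8-puzzle heuristics above can be written out directly (the 9-tuple state encoding with 0 for the blank is an assumed convention; the sample state is my own):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced(state):
    """h1: number of tiles not in their goal position (blank excluded)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def manhattan(state):
    """h2: sum of each tile's grid distance from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        goal_i = GOAL.index(tile)
        total += abs(i // 3 - goal_i // 3) + abs(i % 3 - goal_i % 3)
    return total

s = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(misplaced(s), manhattan(s))  # 6 14
```

Here h2 dominates h1: every misplaced tile is at least one grid step from home, so manhattan(s) ≥ misplaced(s) for every state, which is why the Manhattan distance gives the more efficient search.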

Applications:

●​ Game AI: Heuristics are used to evaluate the "goodness" of a game state, such as
the number of pieces a player has in chess.
●​ Search Algorithms: Heuristic functions are essential for the performance of
algorithms like A*, IDA*, and Best-First Search.
●​ Constraint Satisfaction Problems: Heuristics can be used to guide the search for a
solution.

Important Notes / Key Takeaways:

●​ A good heuristic is crucial for the efficiency of informed search.


●​ The straight-line distance is a powerful and common heuristic for problems with a
geometric component.
●​ Admissibility is a key property that guarantees the optimality of A* search.

8. Search in Complex Environments

Definition: Complex environments are those that are not static, deterministic, and fully
observable, making traditional search algorithms ineffective. To deal with these
complexities, AI agents need to adapt their search strategies. This often involves
planning to handle unexpected outcomes, acting with uncertainty, and continuously
monitoring the environment to adjust plans on the fly. These methods are crucial for
creating agents that can operate in the real world.

Detailed Explanation:

●​ Contingency Planning:
○​ Original Phrase: Planning for different possible outcomes of an action.
○​ Explanation: In stochastic environments where actions don't always have the
same result, an agent must create a plan with branches for each possible
outcome. For example, a robot might plan a sequence of actions but have a
different path planned if it unexpectedly slips on a wet surface.
○​ Extra Note: This type of planning is necessary when the agent can't assume a
deterministic world.
●​ Exploration vs. Exploitation:
○​ Original Phrase: The trade-off between trying new things and sticking with
what works.
○​ Explanation: In unknown environments, an agent must decide whether to
explore new states to gain more knowledge or to exploit its current
knowledge to find a goal.
○​ Extra Note: This is a fundamental challenge in reinforcement learning, where
an agent needs to balance discovering better actions with using its current
best-known strategy.
●​ Online Search:
○​ Original Phrase: The agent acts and searches at the same time.
○​ Explanation: In a dynamic environment, an agent cannot afford to sit and
plan for a long time. Online search interleaves search and execution, where
the agent first plans a few steps, acts, and then re-evaluates the situation
before planning the next steps.
○​ Extra Note: This is a necessary approach for real-time systems like
self-driving cars, where the environment is constantly changing.
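The exploration-exploitation balance is often implemented with an ε-greedy rule (a standard technique from reinforcement learning, offered here as one concrete option rather than the notes' prescription):

```python
import random

def epsilon_greedy(values, epsilon=0.1, rng=random):
    """With probability epsilon pick a random action (explore); otherwise
    pick the action with the highest estimated value (exploit)."""
    if rng.random() < epsilon:
        return rng.randrange(len(values))                   # explore
    return max(range(len(values)), key=values.__getitem__)  # exploit

estimates = [1.0, 2.5, 0.3]   # current value estimates for three actions
print(epsilon_greedy(estimates, epsilon=0.0))  # 1 (pure exploitation)
```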

Real-World Examples:

1.​ Self-Driving Car: Operates in a complex, dynamic, and partially observable
environment. It must constantly monitor traffic, pedestrians, and road conditions,
and adjust its plan in real-time.
2.​ Robotics in a Factory: A robot may have a pre-planned sequence of movements,
but if it drops an item (a stochastic outcome), it needs to have a contingency plan
to pick it up.
3.​ Web-Based AI: An agent exploring a website to find specific information may use
an online search approach, as it can only see one page at a time and must make
decisions as it goes.

Applications:

●​ Robotics: Planning for robots in unpredictable physical environments.


●​ Autonomous Systems: Self-driving cars and drones that must react to real-time
events.
●​ Game AI: Creating intelligent opponents that can adapt to a player's changing
strategy.

Important Notes / Key Takeaways:

●​ Complex environments require more sophisticated search and planning strategies
than simple, static ones.
●​ The ability to handle uncertainty and adapt to new information is a hallmark of
truly intelligent agents.
●​ Online search is a key technique for agents operating in dynamic, real-time
environments.

9. Local Search and Optimization Problems

Definition: Local search algorithms are a family of search algorithms that are used to find
an optimal solution in a large or continuous state space. Unlike systematic search
algorithms that maintain and explore multiple paths, local search algorithms operate on a
single current state and try to improve it by moving to a neighboring state. They are
particularly well-suited for optimization problems where the goal is to find the best state
rather than a path to it.

Detailed Explanation:

●​ Hill-Climbing Search:
○​ Original Phrase: Moves to a neighboring state with the best value.
○​ Explanation: This is the simplest local search algorithm. It starts at an initial
state and iteratively moves to a neighboring state that increases the value of
the objective function. It stops when it reaches a "peak" where no
neighboring state has a higher value.
○​ Extra Note: Hill-climbing is prone to getting stuck in local maxima (a peak
that is not the highest peak in the entire space).
●​ Simulated Annealing:
○​ Original Phrase: A metaheuristic that avoids getting stuck in local maxima.
○​ Explanation: Inspired by the annealing process in metallurgy, this algorithm
allows for "downhill" moves (moving to a worse state) with a certain
probability. The probability decreases over time, making it less likely to move
to a worse state as the search progresses.
○​ Extra Note: Simulated annealing is more likely to find a global optimum than
hill-climbing.
●​ Genetic Algorithms:
○​ Original Phrase: An evolutionary-inspired optimization algorithm.
○​ Explanation: This algorithm works on a population of potential solutions. It
uses concepts like selection, crossover, and mutation to generate new, and
hopefully better, solutions over time. It mimics the process of natural
selection.
○​ Extra Note: Genetic algorithms are effective for a wide range of complex
optimization problems where the state space is too large for systematic
search.

Real-World Examples:

1.​ Hill-Climbing Search: Can be used to find the best configuration of a neural
network by adjusting its weights, but it might get stuck in a suboptimal
configuration.
2.​ Simulated Annealing: Used in circuit design to find the optimal placement of
components on a circuit board to minimize wire lengths and other costs.
3.​ Genetic Algorithms: A company might use this to find the most efficient layout for
a factory, where the goal is to minimize the distance workers have to walk.

Applications:
●​ Optimization: Finding the minimum or maximum value of a function, such as
finding the best parameters for a machine learning model.
●​ Traveling Salesperson Problem: Finding the shortest possible route that visits a
list of cities and returns to the origin.
●​ Scheduling: Optimizing class schedules to minimize conflicts.

Important Notes / Key Takeaways:

●​ Local search algorithms are memory-efficient because they only store one or a few
states.
●​ They are well-suited for optimization problems, where the path to the solution
doesn't matter.
●​ Hill-climbing is simple but flawed; simulated annealing and genetic algorithms are
more robust alternatives for finding a global optimum.

Summary Table

Solving Problems by Searching
Brief Definition: A fundamental AI paradigm that represents a problem as a state space and finds a solution by exploring a path from an initial state to a goal state. It involves defining the problem and using an algorithm to navigate the state space. It is a powerful method for a wide range of AI applications.
Real-World Examples: 1. Route Finding: finding the shortest path from one city to another. 2. Puzzle Games: solving a Rubik's Cube or the 8-puzzle. 3. Game Playing: finding a winning sequence of moves in chess.

Problem-Solving Agents
Brief Definition: A type of goal-based agent that uses a search algorithm to find a sequence of actions to reach a goal. It operates by first formulating a problem and then executing the solution found. These agents are well-suited for environments that are static and deterministic, where planning a full sequence of actions is possible.
Real-World Examples: 1. Automated Warehouse Robot: finds the shortest path to an item and executes the path. 2. Airline Trip Planner: finds a sequence of flights to a destination. 3. Chess-Playing Program: formulates the goal of winning and searches for a sequence of moves to achieve it.

Example Problems
Brief Definition: Simplified, canonical problems used to illustrate and test search algorithms. They are designed to be easy to understand and model, allowing researchers to focus on the performance of the algorithms. They often have clear state spaces and well-defined actions and goals.
Real-World Examples: 1. 8-Puzzle: a sliding-tile puzzle used to test basic search algorithms. 2. Missionaries and Cannibals: a river-crossing puzzle used to test logical reasoning in search. 3. Romania Route Planning: finding a path between two cities on a map, used to demonstrate informed search.

Search Algorithms
Brief Definition: A procedure for exploring a state space to find a path from a start to a goal state. They can be classified as uninformed or informed. Algorithms are judged on their completeness (finding a solution), optimality (finding the best solution), time complexity, and space complexity. The choice of algorithm is a trade-off between these factors.
Real-World Examples: 1. Breadth-First Search: finds the shortest path in an unweighted graph. 2. A* Search: used in GPS navigation to find the fastest route. 3. Genetic Algorithms: used to find optimal designs for products like airplane wings.

Uninformed Search Strategies
Brief Definition: Search algorithms that do not use any domain-specific knowledge to guide their search. They explore the state space in a brute-force manner. Examples include Breadth-First Search, Depth-First Search, and Iterative Deepening Depth-First Search. They are simple but often inefficient for large state spaces.
Real-World Examples: 1. BFS: finding the shortest connection between two people on a social media network. 2. Uniform-Cost Search: a delivery company finding the cheapest route considering fuel costs. 3. IDDFS: used in game-playing AI for games with a large state space but a shallow solution depth.

Informed (Heuristic) Search Strategies
Brief Definition: Search algorithms that use a heuristic function to estimate the cost to the goal, guiding the search more efficiently. The most common is A* search, which combines the path cost with the heuristic estimate to find an optimal solution. These algorithms are much faster for large state spaces.
Real-World Examples: 1. A* Search: a GPS navigation system using straight-line distance as a heuristic. 2. Greedy Best-First Search: a video game AI finding a quick path to a character's destination. 3. Robotic Path Planning: a robot using a heuristic to estimate the distance to its target location.

Heuristic Functions
Brief Definition: A function, h(n), that estimates the cost of the cheapest path from a node n to the goal. A good heuristic guides a search algorithm efficiently. An admissible heuristic never overestimates the cost, ensuring the optimality of algorithms like A*. The straight-line distance is a common example.
Real-World Examples: 1. Route Planning: straight-line distance between two cities. 2. 8-Puzzle: the number of misplaced tiles or the Manhattan distance. 3. Robotic Arm: straight-line distance from the gripper to the object.

Search in Complex Environments
Brief Definition: Adapting search strategies for environments that are not static or deterministic. This involves techniques like contingency planning (planning for multiple outcomes), online search (acting while searching), and balancing exploration with exploitation. These methods are essential for real-world AI applications where uncertainty is a factor.
Real-World Examples: 1. Self-Driving Car: constantly monitoring a dynamic environment and adjusting its plan. 2. Factory Robot: has a contingency plan in case an item is dropped. 3. Web-Based AI: an agent exploring a website and making decisions in real time.

Local Search and Optimization Problems
Brief Definition: Search algorithms that operate on a single state and try to improve it by moving to a neighboring state, rather than finding a path. They are used for optimization problems where the goal is to find the best state. Examples include hill-climbing, simulated annealing, and genetic algorithms.
Real-World Examples: 1. Hill-Climbing: adjusting neural network weights to find the best configuration. 2. Simulated Annealing: optimizing the placement of components on a circuit board. 3. Genetic Algorithms: finding the most efficient layout for a factory.