Wednesday, 26 February 2025
AI
a) Searching is important for solving a problem in AI.
TRUE. Searching is fundamental to AI problem-solving because it provides a systematic way
to explore possible solutions in a state space. For example, in pathfinding problems like
navigating from city A to city B, search algorithms like A* can efficiently find optimal routes
by exploring different path possibilities. Without search mechanisms, AI systems would have
no methodical way to explore and evaluate alternative solutions, making complex problem-
solving nearly impossible.
b) Higher time complexity in finding a solution indicates
that the solution thus found will be suboptimal.
FALSE. Time complexity and solution optimality are independent concepts. For example, A*
search may take longer (higher time complexity) than greedy best-first search but produces
optimal paths. Similarly, UCS (Uniform Cost Search) has higher time complexity than BFS
in many cases but guarantees optimal solutions for weighted graphs. The time taken to find a
solution reflects the algorithm's efficiency, not the quality of the solution it produces.
c) "Order" of the nodes visited using any search algorithm
is only necessary to find out the "path" of the solution.
FALSE. The order of nodes visited serves multiple purposes beyond just finding the solution
path. For example, in breadth-first search, the visitation order ensures we find the shortest
path in unweighted graphs. In algorithms like alpha-beta pruning for game trees, the order of
node evaluation significantly impacts efficiency by enabling earlier pruning. The visitation
order also helps in analyzing algorithm performance, debugging, and understanding search
space exploration patterns.
d) Non-admissible heuristic functions never produce an
optimal solution.
FALSE. A non-admissible heuristic overestimates the cost to reach the goal for some states,
so an algorithm such as A* loses its guarantee of optimality; it may nevertheless return an
optimal solution in particular instances. For example, in route finding, if the overestimates
never divert the search away from the truly optimal path, A* will still find it. Admissibility is
a sufficient condition for A* optimality, not a necessary one, so "never" is too strong a claim:
what is lost is the guarantee of an optimal solution, not every possibility of one.
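To make the effect of admissibility concrete, here is a minimal A* sketch on a tiny hand-made graph, showing an overestimating heuristic steering the search onto a suboptimal path. The graph, edge costs, and heuristic values are illustrative assumptions, not taken from any standard example:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search; returns (cost, path). graph maps node -> [(neighbor, edge_cost)]."""
    frontier = [(h[start], 0, start, [start])]  # entries are (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue  # already expanded this node with a cheaper or equal g
        best_g[node] = g
        for nbr, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None

# Optimal route S -> A -> G costs 2; the direct edge S -> G costs 3.
graph = {"S": [("A", 1), ("G", 3)], "A": [("G", 1)], "G": []}

admissible = {"S": 2, "A": 1, "G": 0}  # never overestimates the true remaining cost
inflated   = {"S": 2, "A": 5, "G": 0}  # overestimates at A (non-admissible)

print(a_star(graph, admissible, "S", "G"))  # (2, ['S', 'A', 'G']) -- optimal
print(a_star(graph, inflated, "S", "G"))    # (3, ['S', 'G'])      -- suboptimal
```

The inflated h(A) makes the direct, more expensive edge look cheaper in f-value terms, so A* expands the goal via the costlier route first.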
e) Multi-solution based search techniques are
advantageous over single solution based ones.
TRUE. Multi-solution approaches offer several advantages, including robustness against
local optima and solution diversity. For example, genetic algorithms maintain a population of
solutions, allowing them to overcome local optima that might trap hill-climbing algorithms.
In constraint satisfaction problems, having multiple solution pathways allows the system to
recover when one approach fails due to constraints. Multi-solution techniques also allow for
greater exploration of the search space, increasing the likelihood of finding better quality
solutions in complex problem domains.
1. Turing Test is a one-sided test.
TRUE. The Turing Test is one-sided because it only evaluates the machine's ability to imitate
human intelligence, not the human's ability to demonstrate intelligence. For example, in the
standard test, a human judge interacts with both a human and a machine without knowing
which is which. The machine passes if the judge cannot reliably distinguish between them.
The test places the burden of proof entirely on the machine to demonstrate human-like
responses, making it fundamentally one-sided in its evaluation criteria.
2. Higher time complexity in finding a solution indicates
that the solution thus found will be optimal.
FALSE. Time complexity and solution optimality are independent properties. For example,
greedy algorithms often have lower time complexity (like O(n log n)) but produce suboptimal
solutions, while dynamic programming approaches might have higher complexity (like O(n²))
yet guarantee optimality. Brute force search has exponential complexity but ensures optimal
solutions, whereas approximate algorithms can have linear complexity but produce
suboptimal results. The computational expense of an algorithm doesn't inherently guarantee
better solution quality.
3. "Order" of the nodes visited using any search algorithm
is only necessary to find out the "path" of the solution.
FALSE. Node visitation order serves multiple purposes beyond path reconstruction. For
example, in BFS, the visitation order guarantees shortest paths in unweighted graphs. In
informed search algorithms like A*, the order directly impacts efficiency by prioritizing
promising paths. The order also provides insights into algorithm behavior for analysis and
optimization purposes. Additionally, in problems like constraint satisfaction, the ordering of
variable assignments can dramatically affect backtracking efficiency.
4. Two lists, OPEN and CLOSED, are necessary for
General Graph Search Algorithm.
TRUE. The OPEN list maintains nodes that have been discovered but not yet explored, while
the CLOSED list tracks already explored nodes. This dual-list structure is essential for
preventing cycles and repeated work in graph search. For example, in A* search through a
maze with cycles, without the CLOSED list to track visited states, the algorithm could
repeatedly explore the same locations, potentially causing infinite loops. The OPEN list
prioritizes which nodes to explore next, while the CLOSED list ensures the search progresses
efficiently without redundancy.
5. The average time complexity of any search algorithm is
exponential in nature.
FALSE. Different search algorithms have different complexity classes. For example, BFS
and DFS have O(V+E) complexity on graphs with V vertices and E edges, which is
polynomial. Dijkstra's algorithm runs in O(E log V) time with a binary heap implementation.
While some algorithms like brute-force search for NP-hard problems have exponential
complexity, many practical search algorithms operate with polynomial efficiency. The
complexity depends on the algorithm design and problem characteristics, not an inherent
exponential nature of all search.
6. In certain scenarios Iterative Broadening search
algorithm can outperform Iterative Deepening search.
TRUE. Iterative Broadening limits the number of children expanded at each node, which can
be advantageous in problems with high branching factors. For example, in a game tree where
only a few moves at each position are promising, Iterative Broadening can avoid wasting
time on poor moves that Iterative Deepening would explore. In natural language processing
tasks like parsing, where each word might have many interpretations but only a few are
contextually relevant, iterative broadening's selective expansion approach can be more
efficient.
7. Bidirectional search strategy is applicable to all sorts of
problems.
FALSE. Bidirectional search requires a well-defined goal state that can be worked backward
from. For example, in pathfinding from city A to city B, we can search both from A toward B
and from B toward A. However, in problems like chess where the goal state (checkmate)
could take countless forms, or in exploration problems without a specific target state,
bidirectional search is not applicable. The strategy also struggles with asymmetric operators
that can't easily be reversed and with problems that have differing forward and backward
branching factors.
8. When the temperature tends to infinity, Simulated
Annealing algorithm boils down to "random" search.
TRUE. At infinite temperature, the acceptance probability in Simulated Annealing
approaches 1 for any move. For example, in a traveling salesman problem, the algorithm
would accept every proposed city reordering regardless of whether it increases path length.
This behavior is equivalent to random search, as the algorithm loses its ability to discriminate
between good and bad solutions. For a worsening move, the acceptance probability
exp(-ΔE/T) approaches 1 as T approaches infinity, so the algorithm accepts all moves
indiscriminately.
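The limiting behaviour can be checked numerically. This small sketch uses the Metropolis acceptance rule; the ΔE value of 10 is an arbitrary illustrative choice:

```python
import math

def acceptance_probability(delta_e, temperature):
    """Metropolis acceptance rule used in simulated annealing."""
    if delta_e <= 0:  # improving (or neutral) moves are always accepted
        return 1.0
    return math.exp(-delta_e / temperature)

# A worsening move (delta_e = 10) becomes almost certain to be accepted as T grows.
for T in (1, 10, 1_000, 1_000_000):
    print(T, acceptance_probability(10, T))
# At T = 1 the probability is ~4.5e-05; at T = 1,000,000 it is ~0.99999,
# so every move is effectively accepted -- random search.
```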
9. Genetic algorithm with population size equal to one is a
"random" search algorithm.
FALSE. A genetic algorithm with a population size of one resembles hill climbing with
random restarts rather than pure random search. For example, in optimization problems, it
would still follow a modified mutation-selection pattern where improvements are kept and
detrimental mutations rejected. While it loses the crossover component and population
diversity that characterize genetic algorithms, making it significantly less effective, it still
maintains directional selection pressure unlike truly random search, which has no memory or
preference for better solutions.
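A population-of-one GA behaves like the mutate-and-keep-if-no-worse loop sketched below (often called a (1+1) scheme). The bit-string encoding and the OneMax fitness function (count of 1-bits) are illustrative assumptions:

```python
import random

def one_plus_one_ea(fitness, genome, generations=2000, rate=0.1, seed=0):
    """GA with population size 1: mutation + selection, no crossover.
    Improvements are kept, so the search retains directional pressure,
    unlike truly random search."""
    rng = random.Random(seed)
    best = genome[:]
    for _ in range(generations):
        # bit-flip mutation: flip each gene independently with probability `rate`
        child = [1 - g if rng.random() < rate else g for g in best]
        if fitness(child) >= fitness(best):  # selection: reject detrimental mutations
            best = child
    return best

# OneMax: maximise the number of 1-bits in a 20-bit string.
result = one_plus_one_ea(sum, [0] * 20)
print(sum(result))  # climbs toward the optimum of 20
```

Because fitness never decreases between accepted states, the trajectory is a stochastic ascent, not a random walk.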
10. Genetic Algorithm without mutation operator may
lead to premature convergence.
TRUE. Without mutation, genetic algorithms rely solely on crossover, which recombines
existing genetic material but cannot introduce new variations. For example, in optimizing a
neural network's weights, if all solutions in the population converge to similar values early in
the search, crossover alone cannot explore beyond these values. The algorithm gets trapped in
local optima because the population lacks the genetic diversity that mutation provides. This is
especially problematic in deceptive fitness landscapes where the global optimum requires
genetic material not present in the initial population.
Hill-Climbing Search: A Detailed Overview
1. Introduction to Hill-Climbing Search
Hill-climbing is a local search algorithm that continuously moves towards increasing values
of an objective function. It is often referred to as steepest-ascent hill climbing because it
selects the best move among all possible successors. Hill climbing is commonly used in
optimization problems where the goal is to find a peak (or minimum, in case of minimization
problems) in a state-space landscape.
Hill climbing is analogous to trying to reach the summit of a mountain in thick fog while
suffering from amnesia. The algorithm does not maintain a search tree or history; it only
considers the current state and its immediate neighbors.
2. Working of Hill-Climbing Algorithm
Algorithm (Steepest-Ascent Version)
1. Start from an initial state.
2. Evaluate the objective function at the current state.
3. Choose the best neighbor with a higher value (steepest ascent) than the current state.
4. Move to the chosen neighbor.
5. Repeat steps 2-4 until no better neighboring state is found (local maximum reached).
6. Return the final state.
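The steps above can be sketched in Python as follows; the objective function and neighbour generator here are illustrative assumptions:

```python
def hill_climb(objective, neighbours, state):
    """Steepest-ascent hill climbing: repeatedly move to the best neighbour
    until no neighbour improves on the current state (a local maximum)."""
    while True:
        best = max(neighbours(state), key=objective)
        if objective(best) <= objective(state):
            return state  # no better neighbour: local (possibly global) maximum
        state = best

# Maximise f(x) = -(x - 3)^2 over the integers, stepping +/-1 from the current x.
f = lambda x: -(x - 3) ** 2
print(hill_climb(f, lambda x: [x - 1, x + 1], 10))  # -> 3
```

Note that the loop keeps only the current state, which is why the algorithm needs constant space but cannot backtrack.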
3. Illustration: Hill Climbing in the 8-Queens Problem
In the 8-Queens Problem, the objective is to place 8 queens on a chessboard such that no
two queens attack each other.
• Each state represents an arrangement of queens.
• The heuristic function (h) measures the number of attacking pairs.
• The goal state has h = 0 (no attacking pairs).
Each queen can move within its column, so each state has 8 × 7 = 56 possible successors.
• Example: A state with h = 17 (high conflicts) may have neighbors with h = 12 (lower
conflicts), making the search move towards them.
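The heuristic h can be computed directly. In the sketch below a state is a tuple giving the row of each column's queen (a common encoding, assumed here), so same-column attacks are impossible by construction:

```python
from itertools import combinations

def attacking_pairs(state):
    """h for the 8-queens problem: number of queen pairs attacking each other.
    state[c] is the row of the queen in column c."""
    h = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):  # same row or same diagonal
            h += 1
    return h

print(attacking_pairs((0,) * 8))                  # 28 -- every pair shares a row
print(attacking_pairs((0, 4, 7, 5, 2, 6, 1, 3)))  # 0  -- a goal state
```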
4. Challenges in Hill Climbing (Foothill Problems)
Hill climbing can get stuck due to the following reasons:
4.1 Local Maxima
A local maximum is a peak where no neighboring state has a higher value, but the global
maximum (best solution) is still far away.
• Example: A queen arrangement in 8-Queens where h=1 but not 0.
• Consequence: The algorithm halts prematurely, missing the best solution.
4.2 Ridges
A ridge is a sequence of local maxima where the algorithm struggles to move because each
step seems to lead downward.
• Example: In a landscape where the highest peak requires lateral movement, the
algorithm cannot navigate effectively.
• Consequence: The algorithm gets stuck as it only considers vertical ascent, not lateral
movement.
4.3 Plateaux
A plateau is a flat region where all neighboring states have the same value.
• Example: A chessboard configuration where multiple moves do not change the
heuristic value.
• Consequence: The algorithm cannot make progress and may enter an infinite loop.
5. Strategies to Overcome Challenges
5.1 Allowing Sideways Moves
• If the algorithm allows moving to a state with the same value, it can escape plateaux.
• Example: Allowing up to 100 sideways moves in 8-Queens improves success from
14% to 94%.
• Risk: Can lead to infinite loops if not controlled.
5.2 Random Restarts
• Run multiple hill-climbing searches with different random initial states.
• Example: If success probability per run is 14%, we need roughly 7 iterations to find
a solution.
• Effectiveness: High probability of reaching the global maximum eventually.
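The iteration estimate follows from the geometric distribution: with success probability p per run, the expected number of runs is 1/p. A quick back-of-envelope check (the per-run step counts of roughly 4 for a success and 3 for a failure are assumptions taken from the performance figures quoted in these notes):

```python
p = 0.14                  # success probability of a single hill-climbing run
expected_runs = 1 / p     # geometric distribution: ~7.14 runs on average

# Rough expected total steps: failed runs cost ~3 steps each, the final
# successful run ~4 steps (assumed per-run costs).
expected_steps = (expected_runs - 1) * 3 + 4
print(round(expected_runs, 2), round(expected_steps, 1))  # 7.14 22.4
```

This crude estimate lands in the same ballpark as the ~25 steps per solution quoted for random-restart hill climbing.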
5.3 Stochastic Hill Climbing
• Chooses a neighbor randomly, favoring better moves probabilistically.
• Effect: Avoids getting stuck in a local maximum.
• Example: Used in genetic algorithms where mutations introduce randomness.
5.4 First-Choice Hill Climbing
• Generates random successors until one is better than the current state.
• Useful when the state space is large.
• Example: Applied in large scheduling problems.
6. Complexity and Performance Analysis
6.1 Time Complexity
• If the branching factor is b and the depth of the optimal solution is d, then hill
climbing takes O(b·d) time in the worst case, since it evaluates up to b successors at
each of d steps.
• It generally runs faster than exhaustive search methods.
6.2 Space Complexity
• O(1) (constant space), as it does not maintain a search tree.
• Efficient for large problem spaces.
6.3 Success Rate and Practical Performance
| Algorithm Variant | Success Rate (8-Queens Problem) | Average Steps Required |
|---|---|---|
| Steepest-Ascent Hill Climbing | 14% | 4 steps (success), 3 steps (failure) |
| Hill Climbing with Sideways Moves | 94% | 21 steps (success), 64 steps (failure) |
| Random-Restart Hill Climbing | 100% (eventually) | ~25 steps per solution |
| Feature | Hill Climbing | Simulated Annealing | Genetic Algorithm |
|---|---|---|---|
| Explores Single Solution? | Yes | Yes | No (population-based) |
| Handles Local Maxima? | No | Yes (temperature-based) | Yes (mutation/crossover) |
| Memory Efficient? | Yes | Yes | No (needs population storage) |
| Optimality Guarantee? | No | No | No, but finds near-optimal |
8. Conclusion
Hill-climbing is a powerful, memory-efficient local search algorithm useful for solving
optimization problems. However, it suffers from local maxima, ridges, and plateaux,
limiting its effectiveness. Variants like random restarts, stochastic selection, and simulated
annealing improve robustness.
In real-world applications, random-restart hill climbing is often the most effective
approach, especially in large, complex search spaces. Although hill climbing lacks global
optimality guarantees, its speed and efficiency make it a strong choice for problems where
an approximate solution is acceptable.
9. Applications of Hill Climbing
1. Artificial Intelligence: Game-playing (e.g., Chess, Go, Tic-Tac-Toe)
2. Robotics: Motion planning
3. Machine Learning: Hyperparameter tuning (e.g., Neural Networks)
4. Scheduling Problems: Timetabling, task scheduling
5. Industrial Design: Circuit design, engineering optimizations
Hill climbing remains a fundamental AI technique, widely used despite its limitations.
Through proper enhancements, it becomes a competitive algorithm for real-world problems
requiring fast, approximate solutions.