Unit 1 - Problem Solving (Aiml)
D. ARAVIND GOSH,
MTECH, MBA, B.ED
ASSISTANT PROFESSOR
I. INTRODUCTION TO AI:
What is Artificial Intelligence?
Artificial Intelligence is the practice of building digital computers and robots
(physical and non-physical) that can carry out intelligent activities. Such
systems are designed so that they can perform dedicated tasks and also take
decisions based on the provided inputs. The reason behind the hype around AI
today is its ability to work and think like a human being.
II. AI APPLICATIONS:
way. This not only keeps kids engaged while learning but also ensures that
they are grasping the concepts, thanks to AI.
4. GPS & Navigation
GPS and navigation systems use the convolutional and graph neural networks of
Artificial Intelligence to provide route suggestions. Let's take a closer look
at AI applications in GPS & Navigation.
Voice Assistance: This feature allows users to interact with the AI
hands-free, which lets them drive seamlessly while communicating through
the navigation system.
Personalization (Intelligent Routing): The personalized system activates
based on the user's pattern and behavior of preferred routes. Irrespective
of the time and duration, the GPS will always provide suggestions based on
multiple patterns and analyses.
Traffic Prediction: AI uses a Linear Regression algorithm to prepare and
analyze traffic data. This helps individuals save time, as alternate routes
are provided based on congestion ahead of the user.
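As a toy illustration of the regression idea (the data and numbers below are invented, not from any real navigation system), a least-squares line can be fitted to past travel times and used to predict a future one:

```python
# Toy linear-regression sketch: predict travel time (minutes) from hour of day.
# All data points are made up for illustration only.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b on 1-D data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

hours = [7, 8, 9, 10, 11]       # hour of day (hypothetical observations)
travel = [30, 42, 50, 61, 70]   # observed travel time in minutes

a, b = fit_linear(hours, travel)
predicted = a * 12 + b          # estimated travel time at noon
```

A real traffic predictor would use many more features (day of week, weather, incidents) and a far larger data set; the mechanics of fitting and predicting are the same.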
Positioning & Planning: GPS and navigation require enhanced support from AI
for better positioning and planning, to avoid unwanted traffic zones. To
help with this, AI-based techniques such as Kalman filtering, sensor
fusion, etc. are being used. Besides this, AI also uses prediction methods
to analyze the fastest and most efficient route and surface it against
real-time data.
5. Healthcare
Patient Monitoring: AI systems are used for early intervention when
abnormal activity or alarming alerts arise during patient care. Besides
this, RPM, or Remote Patient Monitoring, has been growing significantly and
is expected to grow by USD 6 billion by 2025 for treating and monitoring
patients.
Surgical Assistance: To ensure a streamlined procedure, AI algorithms help
surgeons take effective decisions based on the provided insights and make
sure that no further risks are introduced during the operation.
6. Agriculture
Artificial Intelligence is also becoming a part of agriculture and farmers'
lives. It is used to detect various parameters such as the amount of water and
moisture, the amount of deficient nutrients, etc., in the soil. There are also
machines that use AI to detect where weeds are growing, where the soil is
infertile, etc. Let's take a closer look at AI applications in Agriculture.
Stock Monitoring: For rigorous monitoring, and to ensure that crops are
not being affected by any disease, AI uses CNNs to check live crop feeds
and raises an alarm when any abnormality arises.
Supply Chain: AI algorithms help in analyzing and preparing inventory to
maintain supply chain stock. Although this is not new, for the agriculture
field it does help farmers ensure that demands are met with minimal loss.
Pest Management: AI algorithms can analyze data from multiple sources
to surface early warnings to the respective farmers. This technology also
enables less usage of harmful pesticides by offering the best resources for
pest management.
Forecasting: With the help of AI, analyzing weather forecasts and crop
growth has become more convenient in agriculture, and the algorithms help
farmers grow crops and make effective business decisions.
Moderation: Due to the increase in social media engagement, active content
moderation has become key to controlling any disruption. AI uses algorithms
to filter and moderate such content across different social media
platforms. It flags and eliminates any content that violates the community
guidelines.
7. Gaming
Artificial Intelligence is dominating the gaming industry. It is used to
create human-like simulation in games, which enhances the gaming experience.
Apart from that, AI is also used to design games and predict human behavior to
make games more realistic. Various modern games use real-world simulation
built with AI. Let's take a closer look at AI applications in the gaming
sector.
Quality Assurance: Testing games and ensuring their performance gets
easier, allowing testers to perform rigorous testing in comparatively less
time. It empowers teams to fix game mechanics and any other potential bugs
that can hinder performance.
Game Assistance: AI algorithms offer virtual assistance during gaming
sessions, including tips, tutorials, and other useful resources. This
feature helps players stay in the game and understand the metrics
throughout the session.
Animation: To make games more realistic, machine learning and artificial
intelligence algorithms are used in today's gaming industry. Techniques
such as neural networks power simulation and facial expressions for an
immersive experience.
III. PROBLEM SOLVING AGENTS:
On the basis of the problem and its working domain, different types of
problem-solving agents are defined and used at an atomic level, without any
internal state visible to the problem-solving algorithm. The problem-solving
agent works precisely by defining the problem and its several possible
solutions. So we can say that problem solving is a part of artificial
intelligence that encompasses a number of techniques, such as trees, B-trees,
and heuristic algorithms, to solve a problem.
There are basically three types of problems in artificial intelligence:
1. Ignorable: solution steps can be ignored.
2. Recoverable: solution steps can be undone.
3. Irrecoverable: solution steps cannot be undone.
Steps of problem solving in AI: AI problems are directly associated with the
nature of humans and their activities, so we need a finite number of steps to
solve a problem, which makes humans' work easy.
The following steps are required to solve a problem:
Initial State: The problem requires an initial state, which starts the AI
agent towards the specified goal. In this state, new methods also
initialize problem-domain solving by a specific class.
Action: This stage of problem formulation works with a function of a
specific class taken from the initial state; all possible actions are
enumerated in this stage.
Transition: This stage of problem formulation integrates the action done
in the previous stage and produces the resulting state to forward to the
next stage.
Goal test: This stage determines whether the specified goal has been
achieved by the integrated transition model. Whenever the goal is achieved,
the agent stops the actions and moves to the next stage to determine the
cost of achieving the goal.
Path cost: This component of problem solving numerically assigns the cost
of achieving the goal. It covers all hardware, software, and human working
costs.
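The five components above can be sketched in Python; the tiny counting problem used here is a made-up stand-in for a real problem domain:

```python
# A minimal sketch of problem formulation: states are integers, the action
# "inc" adds 1, the goal is to reach 3, and each action costs 1.
# The problem itself is invented purely to show the five components.

class Problem:
    def __init__(self, initial, goal):
        self.initial = initial      # initial state
        self.goal = goal            # goal state

    def actions(self, state):
        return ["inc"]              # actions available in a state

    def result(self, state, action):
        return state + 1            # transition model

    def goal_test(self, state):
        return state == self.goal   # goal test

    def path_cost(self, cost_so_far, state, action, next_state):
        return cost_so_far + 1      # path cost: each step costs 1

p = Problem(0, 3)
state, cost = p.initial, 0
while not p.goal_test(state):
    action = p.actions(state)[0]
    next_state = p.result(state, action)
    cost = p.path_cost(cost, state, action, next_state)
    state = next_state
```

Any real search algorithm (BFS, DFS, A*, ...) only ever talks to a problem through these five methods.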
IV. SEARCH ALGORITHMS:
Artificial Intelligence is the study of building agents that act rationally. Most of
the time, these agents perform some kind of search algorithm in the background
in order to achieve their tasks.
A search problem consists of:
A State Space. Set of all possible states where you can be.
A Start State. The state from where the search begins.
A Goal Test. A function that looks at the current state and returns
whether or not it is the goal state.
The Solution to a search problem is a sequence of actions, called
the plan that transforms the start state to the goal state.
This plan is achieved through search algorithms.
Uninformed Search Algorithms:
Example:
Question. Which solution would DFS find to move from node S to node G if run on the graph below?
Solution. The equivalent search tree for the above graph is as follows. As DFS
traverses the tree “deepest node first”, it would always pick the deeper branch
until it reaches the solution (or it runs out of nodes, and goes to the next
branch). The traversal is shown in blue arrows.
Question. Which solution would BFS find to move from node S to node G if run on the graph below?
Solution. The equivalent search tree for the above graph is as follows. As BFS
traverses the tree “shallowest node first”, it would always pick the shallower
branch until it reaches the solution (or it runs out of nodes, and goes to the
next branch). The traversal is shown in blue arrows.
Path (BFS): S -> D -> G
Path (DFS): S -> A -> B -> C -> G
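The two traversals can be contrasted in code. The adjacency list below is a hypothetical graph chosen so that the searches reproduce the two paths above (it is not the figure from the notes, which is not reproduced here):

```python
from collections import deque

# Hypothetical graph; neighbor order matters for which path each search finds.
graph = {
    "S": ["A", "D"],
    "A": ["B"],
    "B": ["C"],
    "C": ["G"],
    "D": ["G"],
    "G": [],
}

def bfs(start, goal):
    """Breadth-first search: expand the shallowest node first (FIFO queue)."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

def dfs(start, goal, path=None):
    """Depth-first search: always follow the deepest branch first."""
    path = path or [start]
    if path[-1] == goal:
        return path
    for nxt in graph[path[-1]]:
        if nxt not in path:
            found = dfs(start, goal, path + [nxt])
            if found:
                return found
    return None
```

On this graph, BFS returns the shallow path S -> D -> G, while DFS commits to the first branch and returns S -> A -> B -> C -> G.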
UCS is different from BFS and DFS because here the costs come into play. In
other words, traversing via different edges might not have the same cost. The
goal is to find a path where the cumulative sum of costs is the least.
cost(root) = 0
Example:
Question. Which solution would UCS find to move from node S to node G if
run on the graph below?
Solution. The equivalent search tree for the above graph is as follows. The
cost of each node is the cumulative cost of reaching that node from the root.
Based on the UCS strategy, the path with the least cumulative cost is chosen.
Note that due to the many options in the fringe, the algorithm explores most of
them so long as their cost is low, and discards them when a lower-cost path is
found; these discarded traversals are not shown below. The actual traversal is
shown in blue.
Path: S -> A -> B -> G
Cost: 5
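A uniform-cost search can be sketched with a priority queue keyed on cumulative cost. The weighted graph below is made up (the original figure is not reproduced here), with weights chosen so that the cheapest S-to-G path again costs 5:

```python
import heapq

# Hypothetical weighted graph: adjacency lists of (neighbor, edge cost).
graph = {
    "S": [("A", 1), ("D", 3)],
    "A": [("B", 2)],
    "B": [("G", 2)],
    "D": [("G", 4)],
    "G": [],
}

def ucs(start, goal):
    """Uniform-cost search: always expand the frontier node with the
    smallest cumulative path cost g(n)."""
    frontier = [(0, start, [start])]      # (cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in explored:
            continue
        explored.add(node)
        for nxt, step in graph[node]:
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None, float("inf")
```

Here S -> A -> B -> G (cost 1 + 2 + 2 = 5) beats S -> D -> G (cost 3 + 4 = 7), so UCS returns the cheaper path even though both have few edges.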
Informed Search Algorithms:
Here, the algorithms have information on the goal state, which helps in more
efficient searching. This information is obtained by something called
a heuristic.
In this section, we will discuss the following search algorithms.
1. Greedy Search
2. A* Tree Search
3. A* Graph Search
Greedy Search:
In greedy search, we expand the node closest to the goal node. The “closeness”
is estimated by a heuristic h(x).
Strategy: Expand the node closest to the goal state, i.e. expand the node with
the lowest h value.
Example:
Question. Find the path from S to G using greedy search. The heuristic
value h of each node is given below the name of the node.
Path: S -> D -> E -> G
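A greedy best-first sketch follows; the graph and h values below are hypothetical, chosen so that the search reproduces the path above:

```python
import heapq

# Hypothetical graph and heuristic values h(x), made up for illustration.
graph = {"S": ["A", "D"], "A": ["B"], "B": ["C"], "C": ["G"],
         "D": ["E"], "E": ["G"], "G": []}
h = {"S": 7, "A": 6, "B": 5, "C": 3, "D": 4, "E": 2, "G": 0}

def greedy_search(start, goal):
    """Greedy best-first search: always expand the node with the lowest h(x),
    ignoring the cost already paid to reach it."""
    frontier = [(h[start], [start])]
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph[node]:
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], path + [nxt]))
    return None
```

Because only h(x) is consulted, greedy search is fast but not guaranteed optimal: a low-h branch can hide expensive edges.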
A* Tree Search:
Here, h(x) is called the forward cost and is an estimate of the distance of
the current node from the goal node.
And, g(x) is called the backward cost and is the cumulative cost of a node
from the root node.
A* search is optimal only when, for all nodes, the forward cost h(x)
underestimates the actual cost h*(x) to reach the goal. This property of
the A* heuristic is called admissibility.
Admissibility: h(x) <= h*(x) for all nodes x.
Example:
Question. Find the path from S to G using A* search.
Solution. Starting from S, the algorithm computes g(x) + h(x) for all nodes in
the fringe at each step, choosing the node with the lowest sum. The entire
work is shown in the table below.
Note that in the fourth set of iterations, we get two paths with equal summed
cost f(x), so we expand them both in the next set. The path with a lower cost
on further expansion is the chosen path.
Path h(x) g(x) f(x)
S 7 0 7
S -> A 9 3 12
S -> D 5 2 7
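The f(x) = g(x) + h(x) bookkeeping can be sketched as follows. The S-A and S-D edges mirror the table rows above; the remaining nodes, edges, and heuristic values are invented for illustration:

```python
import heapq

# Hypothetical weighted graph. The S-A (cost 3) and S-D (cost 2) edges and
# h(S)=7, h(A)=9, h(D)=5 match the table above; E and G are invented.
graph = {
    "S": [("A", 3), ("D", 2)],
    "A": [("G", 9)],
    "D": [("E", 4)],
    "E": [("G", 3)],
    "G": [],
}
h = {"S": 7, "A": 9, "D": 5, "E": 3, "G": 0}  # admissible: h(x) <= h*(x)

def a_star(start, goal):
    """A* tree search: expand the node with the lowest f(x) = g(x) + h(x)."""
    frontier = [(h[start], 0, [start])]       # (f, g, path)
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g
        for nxt, step in graph[node]:
            heapq.heappush(frontier,
                           (g + step + h[nxt], g + step, path + [nxt]))
    return None, float("inf")
```

On this toy graph A* expands S, then D (f = 7), then E (f = 9), and reaches G with total cost 9, never needing to expand the more expensive branch through A (f = 12).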
A* Graph Search:
A* tree search works well, except that it takes time re-exploring the
branches it has already explored. In other words, if the same node is
expanded twice in different branches of the search tree, A* search might
explore both of those branches, thus wasting time.
A* Graph Search, or simply Graph Search, removes this limitation by
adding this rule: do not expand the same node more than once.
Heuristic. Graph search is optimal only when the forward cost between
two successive nodes A and B, given by h(A) - h(B), is less than or equal
to the backward cost between those two nodes, g(A -> B). This property of
the graph search heuristic is called consistency.
Consistency: h(A) - h(B) <= g(A -> B)
Example:
Question. Use graph search to find the path from S to G in the
following graph.
Solution.
We solve this question in much the same way as the last one, but in this
case we keep track of the nodes explored so that we don't re-explore them.
V. UNINFORMED SEARCH STRATEGIES
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
1. Breadth-first Search:
Advantages:
o BFS will provide a solution if any solution exists.
o If there are several solutions for a given problem, BFS will provide the
minimal solution, i.e. the one requiring the least number of steps.
Disadvantages:
o It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.
o BFS needs lots of time if the solution is far away from the root node.
Example:
In the below tree structure, we have shown the traversal of the tree using the
BFS algorithm from the root node S to the goal node K. The BFS algorithm
traverses in layers, so it will follow the path shown by the dotted arrow, and
the traversed path will be:
1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of the BFS algorithm is given by the
number of nodes traversed in BFS up to the shallowest solution:
T(b) = O(b^d), where d = depth of the shallowest solution and
b = branching factor (the number of successor nodes at every state).
2. Depth-first Search
Advantage:
o DFS requires less memory, as it only needs to store a stack of the
nodes on the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it
traverses the right path).
Disadvantage:
o There is the possibility that many states keep re-occurring, and there is no
guarantee of finding the solution.
o The DFS algorithm goes for deep-down searching, and sometimes it may go
into an infinite loop.
Example:
In the below search tree, we have shown the flow of depth-first search, which
will follow this order:
It will start searching from root node S and traverse A, then B, then D and E.
After traversing E, it will backtrack, as E has no other successor and the
goal node has not yet been found. After backtracking, it will traverse node C
and then G, where it will terminate, as it has found the goal node.
3. Depth-Limited Search Algorithm:
A depth-limited search is a depth-first search with a predetermined depth
limit: a node at the depth limit is treated as if it has no successors.
Depth-limited search can terminate with two kinds of failure:
o Standard failure value: it indicates that the problem does not have any
solution.
o Cutoff failure value: it indicates that there is no solution for the
problem within the given depth limit.
Advantages:
o Depth-limited search is memory efficient.
Disadvantages:
o Depth-limited search is incomplete if the solution lies beyond the depth
limit.
o It may not be optimal even if there is more than one solution to the
problem.
Example:
4. Uniform-cost Search Algorithm:
Uniform-cost search expands the node with the lowest cumulative path cost. The
algorithm is implemented with a priority queue, which gives maximum priority
to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS
algorithm if the path cost of all edges is the same.
Advantages:
o Uniform cost search is optimal because at every state the path with the
least cost is chosen.
Disadvantages:
o It does not care about the number of steps involved in the search; it is
only concerned with path cost, due to which this algorithm may get stuck
in an infinite loop.
Example:
5. Iterative Deepening Depth-first Search:
This Search algorithm combines the benefits of Breadth-first search's fast search
and depth-first search's memory efficiency.
The iterative deepening search algorithm is a useful uninformed search when
the search space is large and the depth of the goal node is unknown.
Advantages:
o It combines the benefits of the BFS and DFS search algorithms in terms of
fast search and memory efficiency.
Disadvantages:
o The main drawback of IDDFS is that it repeats all the work of the
previous phase.
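Depth-limited search and the iterative-deepening loop around it can be sketched together; the tree below is made up:

```python
# Hypothetical tree; IDDFS runs depth-limited DFS with limits 0, 1, 2, ...
tree = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}

CUTOFF = "cutoff"

def dls(node, goal, limit):
    """Depth-limited DFS: returns a path, None (failure), or CUTOFF
    (the limit was hit, so deeper solutions may still exist)."""
    if node == goal:
        return [node]
    if limit == 0:
        return CUTOFF
    cutoff_seen = False
    for child in tree[node]:
        result = dls(child, goal, limit - 1)
        if result == CUTOFF:
            cutoff_seen = True
        elif result is not None:
            return [node] + result
    return CUTOFF if cutoff_seen else None

def iddfs(start, goal, max_depth=10):
    """Iterative deepening: repeat DLS with an increasing depth limit."""
    for limit in range(max_depth + 1):
        result = dls(start, goal, limit)
        if result not in (None, CUTOFF):
            return result
    return None
```

The shallow levels are re-generated on every iteration, which is the repeated work mentioned above, but since a tree grows exponentially with depth, the repetition adds only a constant factor.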
Example:
6. Bidirectional Search Algorithm:
The bidirectional search algorithm runs two simultaneous searches: one from
the initial state, called forward search, and the other from the goal node,
called backward search. Bidirectional search replaces one single search
graph with two small subgraphs, one starting the search from the initial
vertex and the other from the goal vertex. The search stops when these two
graphs intersect each other.
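The idea can be sketched with two breadth-first frontiers; the small undirected graph below is hypothetical:

```python
# Bidirectional BFS sketch on a made-up undirected graph.
graph = {
    1: [2, 3], 2: [1, 4], 3: [1, 5],
    4: [2, 6], 5: [3, 6], 6: [4, 5],
}

def bidirectional_search(start, goal):
    """Grow one frontier from the start and one from the goal; stop as soon
    as the two frontiers intersect, returning a meeting node."""
    if start == goal:
        return start
    front, back = {start}, {goal}
    seen_f, seen_b = {start}, {goal}
    while front and back:
        # Expand the forward frontier by one layer.
        front = {n for node in front for n in graph[node]} - seen_f
        seen_f |= front
        meet = front & seen_b
        if meet:
            return meet.pop()
        # Expand the backward frontier by one layer.
        back = {n for node in back for n in graph[node]} - seen_b
        seen_b |= back
        meet = back & seen_f
        if meet:
            return meet.pop()
    return None
```

Each frontier only has to reach half the distance, which is why bidirectional search can explore far fewer nodes than a single search from the start.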
Advantages:
o Bidirectional search is fast.
o It requires less memory.
Disadvantages:
o Implementation of the bidirectional search tree is difficult.
o In bidirectional search, one should know the goal state in advance.
Example:
In the below search tree, the bidirectional search algorithm is applied. This
algorithm divides one graph/tree into two sub-graphs. It starts traversing
from node 1 in the forward direction and from goal node 16 in the backward
direction.
VI. HEURISTIC SEARCH:
A heuristic helps us find a good-enough solution within a reasonable amount
of time and memory space. This is a kind of shortcut, as we regularly trade
one of optimality, completeness, exactness, or accuracy for speed.
A heuristic is a function employed in informed search that finds the most
promising path. It takes the present state of the agent as its input and
produces an estimate of how close the agent is to the goal.
A heuristic (or heuristic function) guides search algorithms. At each
branching step, it assesses the available information and makes a choice
about which branch to follow by ranking the alternatives. A heuristic is
any device that is frequently effective but is not guaranteed to work in
every case.
We need heuristics in order to produce, in a reasonable amount of time, an
answer that is good enough for the problem in question. It does not need to
be the best; an approximate solution will do, since it can be found
quickly. Most problems are exponential. Heuristic search lets us reduce
this to a roughly polynomial number of possibilities. We use it in AI
because we can apply it in situations where known algorithms cannot be
relied upon.
Informed search algorithms have information on the goal state, which helps in
more efficient searching. This information is captured as a function that
measures how close a state is to the goal state.
Their significant advantage is high efficiency: they are capable of finding
solutions in a shorter span than uninformed search.
They use knowledge such as how far we are from the goal, the path cost, how
to reach the goal node, etc. This data helps agents explore less of the
search space and find the goal node more efficiently. Informed search is also
comparatively less expensive than uninformed search. Its models include:
a. A* Search
The A* search algorithm finds the shortest path through the search space using
a heuristic function. This search algorithm expands fewer nodes of the search
tree and gives an optimal result faster.
b. Greedy Best-First Search
The greedy best-first search algorithm always selects the path which appears
best at that moment. In the best-first search algorithm, we expand the node
which is closest to the goal node, and the closeness is estimated by a
heuristic function h(x).
This sort of search always picks the path which appears best at that point.
It is a blend of BFS and DFS. It uses a heuristic function to guide the
search. Best-first search lets us take advantage of both algorithms. The
plans to arrive at the goal state from the start state differ only by the
order and length of actions.
a. Breadth-First Search
This algorithm picks a single node (the starting or source point) in a graph
and then visits all the nodes adjacent to the selected node.
Once the algorithm visits and marks the starting node, it moves towards the
nearest unvisited nodes and analyses them.
Once visited, all nodes are marked. These iterations continue until all the
nodes of the graph have been successfully visited and marked.
Some of the cons of Breadth-First Search include:
It consumes a lot of memory space, as every level of nodes is saved for
generating the next one.
Its complexity depends on the number of nodes. It can check duplicate
nodes.
b. Uniform Cost Search
Uniform-cost search expands nodes according to their path costs from the
root node. It is often used to solve any graph/tree where the optimal cost is
in demand.
It is identical to breadth-first search if every transition has the same
cost. It explores paths in increasing order of cost.
c. Depth First Search
It relies on the idea of LIFO, as it stands for Last In First Out.
Accordingly, it is implemented with recursion using a LIFO stack data
structure. It thus produces the same set of nodes as the breadth-first
method, just in a different order.
As the path from the root to the leaf node is stored in each iteration, the
space requirement for stored nodes is linear: with branching factor b and
depth m, the storage space is bm.
In the event that the goal depth is d, and the chosen cut-off is less than
d, then this algorithm may fail. If, on the other hand, d is less than the
fixed cut-off, execution time increases.
Its complexity depends on the number of paths. It cannot check duplicate
nodes.
d. Iterative Deepening Depth First Search
e. Bidirectional Search
This, as the name suggests, runs in two directions. It works with two
searches that run at the same time: one from the source towards the goal,
and the other from the goal back towards the source.
The two searches should share the same data structure. It relies on a
directed graph to find the shortest route between the source (initial node)
and the goal node.
The two searches start from their respective points, and the algorithm stops
when the two searches meet at a node. It is a faster method and improves the
amount of time required for traversing the graph.
This technique is efficient in situations where the start node and goal node
are unique and defined. The branching factor is the same for the two
searches.
VII. HILL CLIMBING IN AI:
Hill Climbing is a kind of heuristic search for mathematical optimization
problems in the field of Artificial Intelligence. Given a set of inputs and a
reasonably good heuristic function, it attempts to find a sufficiently good
solution to the problem. This solution may not be the global optimum.
In the above definition, mathematical optimization problems imply that hill
climbing solves problems where we need to maximize or minimize a given real
function by choosing values from the given inputs.
For example, the Travelling Salesman Problem, where we need to minimize the
distance travelled by the salesperson.
'Heuristic search' means that this search algorithm may not find the optimal
solution to the problem, but it will give a good solution in a reasonable
time.
A heuristic function is a function that ranks all the potential alternatives
at any branching step of a search algorithm based on the available
information. It makes the algorithm pick the best route out of the possible
routes.
Generate and Test variant: Hill Climbing is a variant of the Generate
and Test strategy. The Generate and Test technique produces feedback which
helps to decide which direction to move in the search space.
Use of Greedy Approach: The hill-climbing search moves in the direction
which improves the cost.
No backtracking: It does not backtrack the search space, as it does not
remember the past states.
Types of Hill Climbing in AI
a. Simple Hill Climbing
Simple hill climbing is the simplest way to implement a hill-climbing
algorithm. It evaluates only one neighbour node state at a time, selects the
first one which improves the current cost, and sets it as the current state.
It checks only one successor state and, if it finds it better than the
current state, moves there; otherwise it stays in the same state.
Its features include:
Less time-consuming
A less optimal solution, which is not guaranteed
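A minimal sketch of simple hill climbing on a one-dimensional objective; the function and step size are invented for illustration:

```python
# Simple hill climbing on a made-up 1-D objective: maximize f(x) = -(x-3)^2,
# whose single peak is at x = 3.

def f(x):
    return -(x - 3) ** 2

def simple_hill_climb(start, step=1, max_iters=100):
    """Evaluate one neighbor at a time; move to the first improvement,
    and stop when no neighbor improves the current state."""
    current = start
    for _ in range(max_iters):
        improved = False
        for neighbor in (current + step, current - step):
            if f(neighbor) > f(current):   # first improving neighbor wins
                current = neighbor
                improved = True
                break
        if not improved:                   # local optimum reached
            break
    return current
```

On this single-peak function the local optimum is also the global one; on a bumpier function the same loop would stop at whichever hill it started on, which is exactly the weakness discussed below.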
b. Steepest Ascent Hill Climbing
Steepest-ascent hill climbing examines all the neighbouring nodes of the
current state and selects the neighbour which is closest to the goal state.
c. Stochastic Hill Climbing
Stochastic hill climbing does not examine all its neighbours before moving.
It makes use of randomness as a part of the search process. It is also a
local search algorithm, meaning that it modifies one solution and searches
the relatively local area of the search space until a local optimum is
found.
This algorithm chooses one neighbour node at random and decides whether to
take it as the current state or examine another state.
Hill climbing, simulated annealing, tabu search, and genetic algorithms are
a few examples of different kinds of local search algorithms. Each of these
algorithms operates a little bit differently, but they all follow the same
fundamental procedure of iteratively creating new solutions and comparing
them to the existing solution to determine whether they are superior.
The local search algorithm in artificial intelligence is a crucial tool in the field
of artificial intelligence and is frequently employed to address a range of
optimization issues.
Local Search
Initialization:
The algorithm starts with an initial solution to the problem. This solution
can be generated randomly or using a heuristic.
Evaluation:
The quality of the initial solution is evaluated using an objective function.
The objective function measures how good the solution is, based on the
problem constraints and requirements.
Neighborhood search:
The algorithm generates neighbouring solutions by making small
modifications to the current solution. These modifications can be random
or guided by heuristics.
Selection:
The neighboring solutions are evaluated using the objective function, and
the best solution is selected as the new current solution.
Termination:
The algorithm terminates when a stopping criterion is met. This criterion
can be a maximum number of iterations, a threshold value for the
objective function, or a time limit.
Solution:
The final solution is the best solution found during the search process.
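The six steps above can be sketched as one loop; the integer objective and neighborhood below are made up:

```python
import random

# Generic local-search skeleton following the six steps above; the objective
# (minimize (x-5)^2 over integers 0..10) is invented for illustration.

def objective(x):
    return (x - 5) ** 2            # evaluation: lower is better

def neighbors(x):
    return [n for n in (x - 1, x + 1) if 0 <= n <= 10]   # neighborhood

def local_search(max_iters=50, seed=0):
    rng = random.Random(seed)
    current = rng.randint(0, 10)   # initialization: a random solution
    for _ in range(max_iters):     # termination: iteration budget
        best = min(neighbors(current), key=objective)     # selection
        if objective(best) >= objective(current):
            break                  # no improving neighbor: stop early
        current = best
    return current                 # final solution
```

Every specific algorithm below (hill climbing, beam search, simulated annealing) is a variation on this loop: they differ in how neighbors are generated, how many candidates are kept, and when a worse move is accepted.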
The main principle of local search algorithms is to begin with an initial
solution and then move to a better one in the neighborhood until no better
solution is found. There are several types of local search algorithms, some
of which are listed below:
Hill Climbing
The simplicity and effectiveness of hill climbing, which does not require the
creation and upkeep of a search tree, is one of its main benefits. Its main
drawback, however, is that it might get stuck in local optima, where it is
impossible to improve without first making a non-improving move. Numerous
modifications to the fundamental hill climbing algorithm, including simulated
annealing and tabu search, have been suggested to address this issue.
Local Beam Search
The basic steps of the local beam search algorithm are as follows: start
with k randomly generated states; at each step, generate all successors of
the current k states; if any successor is the goal, stop; otherwise keep
the k best successors and repeat.
The ability to explore multiple search paths simultaneously gives local beam
search an edge over traditional hill climbing, which can increase the likelihood
of discovering the global optimum. Even so, it can still find itself in local
optima, especially if the search space is big or complicated.
Simulated Annealing
The basic steps of the simulated annealing algorithm are as follows: start
from an initial solution at a high temperature; repeatedly pick a random
neighbouring solution, accepting it if it is better, and accepting worse
solutions with a probability that depends on the temperature; gradually
lower the temperature until the system freezes.
The main principle of the simulated annealing algorithm is to control the level
of randomness in the search process by altering the temperature parameter. High
temperatures enable the algorithm to explore new regions of the search space by
increasing its propensity to accept non-improving moves. The algorithm
becomes more selective and concentrates on improving the solution as the
temperature drops.
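A minimal simulated-annealing sketch on a toy objective follows; the objective, temperature, and cooling rate are illustrative values, not tuned parameters from any source:

```python
import math
import random

# Simulated-annealing sketch: minimize the made-up cost (x-7)^2 over the
# integers 0..20. All parameters are illustrative only.

def cost(x):
    return (x - 7) ** 2

def simulated_annealing(start=0, temp=10.0, cooling=0.95, steps=500, seed=1):
    rng = random.Random(seed)
    current = start
    best = current
    for _ in range(steps):
        neighbor = min(20, max(0, current + rng.choice([-1, 1])))
        delta = cost(neighbor) - cost(current)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / temp), which shrinks as the temperature drops.
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = neighbor
        if cost(current) < cost(best):
            best = current
        temp *= cooling            # cooling schedule
    return best
```

Early on, the high temperature lets the walk wander (escaping local optima); by the end it behaves like greedy hill climbing, only accepting improvements.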
Due to their capacity to efficiently search sizable solution spaces, local search
algorithms in artificial intelligence are frequently used to solve the TSP. Local
search algorithms for the TSP work by starting with an initial solution and
incrementally improving it by making small changes to it one at a time until no
more advancements are possible.
A popular local search algorithm for the TSP is the 2-opt algorithm: it
removes two edges from the current solution and reconnects the remaining
tour segments in the way that minimizes the total distance. The algorithm
keeps going until there is no more room for improvement.
Both 2-opt and the Lin-Kernighan algorithm are examples of local search
algorithms that operate on a single solution at a time. However, to improve the
chances of finding the global optimum, various modifications to these
algorithms have been proposed, such as tabu search and simulated annealing.
These algorithms introduce additional mechanisms to escape from local optima
and explore new parts of the search space.
VIII. ADVERSARIAL SEARCH:
o Perfect information: A game with perfect information is one in which
agents can look at the complete board. The agents have all the
information about the game, and they can see each other's moves as well.
Examples are Chess, Checkers, Go, etc.
o Imperfect information: If in a game the agents do not have all the
information about the game and are not aware of what is going on, such a
game is called a game with imperfect information, e.g. Battleship,
Bridge, etc.
o Deterministic games: Deterministic games are those games which
follow a strict pattern and set of rules for the games, and there is no
randomness associated with them. Examples are chess, Checkers, Go, tic-
tac-toe, etc.
o Non-deterministic games: Non-deterministic games are those which have
various unpredictable events and a factor of chance or luck. This factor
of chance or luck is introduced by either dice or cards. These are
random, and each action response is not fixed. Such games are also called
stochastic games.
Example: Backgammon, Monopoly, Poker, etc.
Zero-Sum Game
Zero-sum games are adversarial search games in which each agent's gain or
loss of utility is exactly balanced by the loss or gain of the other agent:
one player tries to maximize a single value while the other tries to
minimize it. Chess and tic-tac-toe are examples of zero-sum games.
Zero-sum game: Embedded thinking
The zero-sum game involves embedded thinking, in which one agent or player is
trying to figure out:
o What to do.
o How to decide the move.
o He also needs to think about his opponent.
o The opponent, likewise, thinks about what to do.
Each of the players is trying to find out the response of his opponent to his
actions. This requires embedded thinking or backward reasoning to solve game
problems in AI.
Game tree:
A game tree is a tree in which the nodes are game states and the edges are
the moves made by the players. A game tree involves an initial state, an
actions function, and a result function.
The following figure is showing part of the game-tree for tic-tac-toe game.
Following are some key points of the game:
Example Explanation:
o From the initial state, MAX has 9 possible moves, as he plays first. MAX
places x and MIN places o, and the two players play alternately until we
reach a leaf node where one player has three in a row or all the squares
are filled.
o Both players compute the minimax value of each node, which is the best
achievable utility against an optimal adversary.
o Suppose both players know tic-tac-toe well and play their best game. Each
player does his best to prevent the other from winning; MIN acts against
MAX in the game.
o So in the game tree we have layers of MAX and layers of MIN, and each
layer is called a ply. MAX places x, then MIN places o to prevent MAX from
winning, and this game continues until a terminal node is reached.
o In the end, either MIN wins, MAX wins, or it's a draw. This game tree is
the whole search space of possibilities when MIN and MAX are playing
tic-tac-toe and taking turns alternately.
o It aims to find the optimal strategy for MAX to win the game.
o It follows the approach of Depth-first search.
o In the game tree, the optimal leaf node could appear at any depth of the
tree.
o The minimax values are propagated up the tree from the terminal nodes
until the value of the root is determined.
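The minimax propagation can be sketched on a tiny hypothetical game tree (not the tic-tac-toe tree above):

```python
# Minimax sketch on a made-up game tree: internal nodes hold children,
# leaves hold utility values for MAX.
game_tree = {
    "A": ["B", "C"],        # A: MAX to move
    "B": [3, 5],            # B, C: MIN to move; integers are terminal
    "C": [2, 9],            # utilities from MAX's point of view
}

def minimax(node, is_max):
    """Depth-first minimax: MAX layers take the max of their children's
    values, MIN layers take the min."""
    if isinstance(node, int):          # terminal node: return its utility
        return node
    values = [minimax(child, not is_max) for child in game_tree[node]]
    return max(values) if is_max else min(values)
```

At B, MIN chooses min(3, 5) = 3; at C, min(2, 9) = 2; so MAX at the root chooses the branch worth 3, the best achievable utility against an optimal adversary.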
IX. CONSTRAINT SATISFACTION PROBLEMS (CSP):
Constraint satisfaction problems (CSPs) are a type of AI problem where the
goal is to find a solution that meets a set of constraints: values for a
group of variables that satisfy a set of restrictions or rules. CSPs are
frequently employed in AI for tasks including resource allocation, planning,
scheduling, and decision-making.
Constraint Satisfaction Problems (CSP) representation:
The finite set of variables V1, V2, ..., Vn.
A non-empty domain for every single variable: D1, D2, ..., Dn.
The finite set of constraints C1, C2, ..., Cm,
where each constraint Ci restricts the possible values for variables,
e.g., V1 ≠ V2
Each constraint Ci is a pair <scope, relation>
Example: <(V1, V2), V1 not equal to V2>
Scope = set of variables that participate in constraint.
Relation = list of valid variable value combinations.
The relation might be an explicit list of permitted combinations, or an
abstract relation that supports membership testing and listing.
Constraint Satisfaction Problems (CSP) algorithms:
The backtracking algorithm is a depth-first search algorithm that
methodically investigates the search space of potential solutions up until a
solution is discovered that satisfies all the restrictions. The method begins
by choosing a variable and giving it a value before repeatedly attempting to
give values to the other variables. The method returns to the prior variable
and tries a different value if at any time a variable cannot be given a value
that fulfills the requirements. Once all assignments have been tried or a
solution that satisfies all constraints has been discovered, the algorithm
ends.
The forward-checking algorithm is a variation of the backtracking
algorithm that condenses the search space using a type of local consistency.
For each unassigned variable, the method keeps a list of remaining values
and applies local constraints to eliminate inconsistent values from these
sets. The algorithm examines a variable’s neighbors after it is given a value
to see whether any of its remaining values become inconsistent and
removes them from the sets if they do. The algorithm goes backward if, after
forward checking, a variable has no more values.
Algorithms for propagating constraints are a class that uses local
consistency and inference to condense the search space. These algorithms
operate by propagating restrictions between variables and removing
inconsistent values from the variable domains using the information
obtained.
Implementation code for Constraint Satisfaction Problems (CSP):
class CSP:
    def __init__(self, variables, domains, constraints):
        self.variables = variables
        self.domains = domains
        self.constraints = constraints
        self.solution = None

    def solve(self):
        assignment = {}
        self.solution = self.backtrack(assignment)
        return self.solution

    def backtrack(self, assignment):
        if len(assignment) == len(self.variables):
            return assignment
        var = self.select_unassigned_variable(assignment)
        for value in self.domains[var]:
            if self.is_consistent(var, value, assignment):
                assignment[var] = value
                result = self.backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]
        return None

    def select_unassigned_variable(self, assignment):
        for var in self.variables:
            if var not in assignment:
                return var

    def is_consistent(self, var, value, assignment):
        for constraint in self.constraints.get(var, []):
            if not constraint(value, assignment):
                return False
        return True
puzzle = [
    [5, 3, 0, 0, 7, 0, 0, 0, 0],
    [6, 0, 0, 1, 9, 5, 0, 0, 0],
    [0, 9, 8, 0, 0, 0, 0, 6, 0],
    [8, 0, 0, 0, 6, 0, 0, 0, 3],
    [4, 0, 0, 8, 0, 3, 0, 0, 1],
    [7, 0, 0, 0, 2, 0, 0, 0, 6],
    [0, 6, 0, 0, 0, 0, 2, 8, 0],
    [0, 0, 0, 4, 1, 9, 0, 0, 5],
    [0, 0, 0, 0, 8, 0, 0, 0, 0]
]
def print_sudoku(puzzle):
    for i in range(9):
        if i % 3 == 0 and i != 0:
            print("- - - - - - - - - - -")
        row = ""
        for j in range(9):
            if j % 3 == 0 and j != 0:
                row += "| "
            row += str(puzzle[i][j]) + " "
        print(row)

print_sudoku(puzzle)
Output:
5 3 0 | 0 7 0 | 0 0 0
6 0 0 | 1 9 5 | 0 0 0
0 9 8 | 0 0 0 | 0 6 0
- - - - - - - - - - -
8 0 0 | 0 6 0 | 0 0 3
4 0 0 | 8 0 3 | 0 0 1
7 0 0 | 0 2 0 | 0 0 6
- - - - - - - - - - -
0 6 0 | 0 0 0 | 2 8 0
0 0 0 | 4 1 9 | 0 0 5
0 0 0 | 0 8 0 | 0 0 0