
SRI VENKATESWARAA COLLEGE OF TECHNOLOGY

SECOND YEAR IVth SEM

ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

D. ARAVIND GOSH,
MTECH,MBA,B.ED
ASSISTANT PROFESSOR

UNIT 1 – PROBLEM SOLVING

INTRODUCTION TO AI – AI APPLICATIONS – PROBLEM SOLVING AGENTS – SEARCH ALGORITHMS – UNINFORMED SEARCH STRATEGIES – HEURISTIC SEARCH STRATEGIES – LOCAL SEARCH AND OPTIMIZATION PROBLEMS – ADVERSARIAL SEARCH – CONSTRAINT SATISFACTION PROBLEMS (CSP)

I. INTRODUCTION TO AI:
What is Artificial Intelligence?
Artificial Intelligence is the practice of turning digital computers into working agents, both physical (robots) and non-physical (software), that carry out activities. They are designed in such a way that they can perform any dedicated task and also take decisions based on the provided inputs. The reason behind its hype around the world today is its ability to work and think like a human being.

Besides this, Artificial Intelligence is a branch of computer science that was introduced with the idea of making things simpler and automating work that humans cannot easily do (in most cases). The algorithms in artificial intelligence learn from the provided data so that future predictions can be made for effective business.

II. AI APPLICATIONS:

1. Artificial Intelligence in E-Commerce:


 Personalization: Using this feature, customers are shown products based on their interest patterns, which eventually drives more conversions.
 Enhanced Support: It is very important to attend to every customer's query to reduce the churn ratio, and AI-powered chatbots are well capable of handling most of these queries, 24×7.
 Dynamic Pricing Structure: A smart way of adjusting the price of any given product by analyzing data from different sources, on the basis of which price predictions are made.
 Fake Review Detection: A report suggested that 9 out of 10 people tend to
go through customer reviews first before they actually place any order.
 Voice Search: With the introduction of this feature, many applications and
websites are using voice-over searches in their system. Today, 6 out of 10
prefer to use this feature for online shopping. In addition to this, alone in
the USA, the market growth has risen up to 400% in just 2 years, i.e. from
4.6 USD Billion to 20 USD Billion.
2. AI in Education Purpose:
 Voice Assistant: With the help of AI algorithms, this feature can be used in multiple and broad ways to save time, provide convenience, and assist users as and when required.

 Gamification: This feature has enabled e-learning companies to design attractive game modes in their systems so that kids can learn in a super fun way. This will not only keep kids engaged while learning but will also ensure that they are grasping the concepts, and all thanks to AI for that.

 Smart Content Creation: AI uses algorithms to detect, predict and design


content & provide valuable insights based on the user’s interest which can
include videos, audio, infographics, etc. Following this, with the
introduction of AR/VR technologies, e-learning companies are likely to
start creating games (for learning), and video content for the best
experience.
3. Artificial Intelligence in Robotics
Artificial Intelligence is one of the major technologies that give the robotics field a boost in efficiency. AI enables robots to make decisions in real time and increase productivity.
 NLP: Natural Language Processing plays a vital role in robotics, allowing a robot to interpret commands as a human being instructs them. It relies on AI algorithms & techniques such as sentiment analysis, syntactic parsing, etc.
 Object Recognition & Manipulation: This functionality enables robots to
detect objects within the perimeter and this technique also helps robots to
understand the size & shape of that particular object. Besides this, this
technique has two units, one is to identify the object & the other one refers
to the physical interaction with the object.
 HRI: With the help of AI algorithms, HRI or Human-Robotics Interaction
is being developed that helps in understanding human patterns such as
gestures, expressions, etc. This technique helps maximize the performance
of robots and ensures that it reaches and maintains its accuracy.
4. GPS and Navigations
GPS technology uses Artificial Intelligence to compute and suggest the best available routes to users for travelling. GPS and navigation systems use the convolutional and graph neural networks of Artificial Intelligence to provide these suggestions. Let's take a closer look at AI applications in GPS & Navigation.
 Voice Assistance: This feature allows users to interact with the AI hands-free, which allows them to drive seamlessly while communicating with the navigation system.
 Personalization (Intelligent Routing): The personalized system gets
active based on the user’s pattern & behavior of preferred routes.
Irrespective of the time & duration, the GPS will always provide
suggestions based on multiple patterns & analyses.
 Traffic Prediction: AI uses a Linear Regression algorithm that helps in
preparing and analyzing the traffic data. This clearly helps an individual in
saving time and alternate routes are provided based on congestion ahead of
the user.
 Positioning & Planning: GPS & navigation require enhanced support from AI for better positioning & planning to avoid unwanted traffic zones. To help with this, AI-based techniques such as Kalman filtering, sensor fusion, etc. are being used. Besides this, AI also uses prediction methods on real-time data to work out the fastest & most efficient route.
5. Healthcare

Artificial Intelligence is widely used in the field of healthcare and medicine.


The various algorithms of Artificial Intelligence are used to build precise
machines that are able to detect minor diseases inside the human body.
 Telehealth: This feature enables doctors and healthcare experts to monitor patients closely while analyzing data to prevent any uncertain health issues. Patients who are at high risk and require intensive care are likely to benefit from this AI-powered feature.

 Patient Monitoring: In case of any abnormal activity and alarming alerts
during the care of patients, an AI system is being used for early
intervention. Besides this, RPM, or Remote Patient Monitoring has been
significantly growing & is expected to go up by USD 6 Billion by 2025, to
treat and monitor patients.
 Surgical Assistance: Guided by AI algorithms, surgical assistance systems help surgeons take effective decisions based on the provided insights, ensuring a streamlined procedure and that no further risks are introduced while operating.
6. Agriculture
Artificial Intelligence is also becoming a part of agriculture and farmers’ life.
It is used to detect various parameters such as the amount of water and
moisture, amount of deficient nutrients, etc in the soil. There is also a
machine that uses AI to detect where the weeds are growing, where the soil is
infertile, etc. Let’s take a closer look at AI applications in Agriculture.
 Stock Monitoring: For rigorous monitoring, and to ensure that crops are not being affected by any disease, AI uses convolutional neural networks to check live crop feeds and raises an alarm when any abnormality arises.
 Supply Chain: The AI algorithm helps in analyzing and preparing the
inventory to maintain the supply chain stock. Although it’s not new, for the
agriculture field, it does help farmers to ensure the demands are being met
with minimal loss.
 Pest Management: AI algorithms can analyze data from multiple sources
to identify early warnings to their respective farmers. This technology also
enables less usage of harmful pesticides by offering the best resources for
pest management.
 Forecasting: With the help of AI, analyzing the weather forecast and crop
growth has become more convenient in the field of agriculture and the
algorithms help farmers to grow crops with effective business decisions.

 Moderation: Due to an increase in social media engagement, active content moderation has become key to controlling any disruption. AI uses algorithms to filter and moderate such content across different social media platforms. It flags and eliminates any content that violates the community guidelines.
7. Gaming
Artificial Intelligence is dominating the gaming industry. Artificial Intelligence is used to create human-like simulations in games, which enhances the gaming experience. Apart from that, AI is also used to design games and predict human behavior, to make games more realistic. Various modern games use real-world simulation powered by AI. Let's take a closer look at AI applications in the gaming sector.
 Quality Assurance: Testing games & ensuring their performance gets easier, allowing testers to perform rigorous testing in comparatively less time. It helps identify and fix issues in the game mechanics and any other potential bugs that can hinder performance.
 Game Assistance: AI algorithms offer virtual assistance during gaming sessions, including tips, tutorials, and other useful resources. This feature helps players stay in the game & understand the metrics during the whole session.
 Animation: To make games more realistic, machine learning and artificial intelligence algorithms are being used in today's gaming industry. Techniques such as neural networks power simulation and facial expressions for an immersive experience.
III. PROBLEM SOLVING AGENTS:
On the basis of the problem and its working domain, different types of problem-solving agents are defined and used at an atomic level, without any internal state visible to the problem-solving algorithm. The problem-solving agent performs precisely by defining problems and their several solutions. So we can say that problem solving is a part of artificial intelligence that encompasses a number of techniques, such as trees, B-trees, and heuristic algorithms, to solve a problem.
There are basically three types of problem in artificial intelligence:
1. Ignorable: In which solution steps can be ignored.
2. Recoverable: In which solution steps can be undone.
3. Irrecoverable: In which solution steps cannot be undone.
Steps of problem-solving in AI: The problems of AI are directly associated with the nature of humans and their activities. So we need a finite number of steps to solve a problem, which makes the work easy for humans.
The following steps are required to solve a problem:

 Problem definition: Detailed specification of inputs and acceptable system


solutions.
 Problem analysis: Analyse the problem thoroughly.
 Knowledge Representation: collect detailed information about the
problem and define all possible techniques.
 Problem-solving: Selection of best techniques.
Components to formulate the associated problem:

 Initial State: The state from which the AI agent starts working towards the specified goal. In this state, new methods also initialize problem-domain solving via a specific class.
 Action: This stage of problem formulation works with a function of a specific class applied to the initial state; all possible actions are defined in this stage.
 Transition: This stage of problem formulation integrates the actual action done in the previous action stage and produces the resulting state, which is forwarded to the next stage.

 Goal test: This stage determines whether the specified goal has been achieved by the integrated transition model. Whenever the goal is achieved, the actions stop and control moves to the next stage to determine the cost of achieving the goal.
 Path costing: This component assigns a numerical cost to achieving the goal. It accounts for all hardware, software, and human working costs.
IV. SEARCH ALGORITHMS:
Artificial Intelligence is the study of building agents that act rationally. Most of
the time, these agents perform some kind of search algorithm in the background
in order to achieve their tasks.
 A search problem consists of:

 A State Space. Set of all possible states where you can be.
 A Start State. The state from where the search begins.
 A Goal Test. A function that looks at the current state and returns whether or not it is the goal state.
 The Solution to a search problem is a sequence of actions, called
the plan that transforms the start state to the goal state.
 This plan is achieved through search algorithms.

Uninformed Search Algorithms:

The search algorithms in this section have no additional information on the


goal node other than the one provided in the problem definition. The plans to
reach the goal state from the start state differ only by the order and/or length of
actions. Uninformed search is also called Blind search.
The following uninformed search algorithms are discussed in this section.
1. Depth First Search
2. Breadth First Search
3. Uniform Cost Search
Depth First Search:
Depth-first search (DFS) is an algorithm for traversing or searching tree or
graph data structures. The algorithm starts at the root node (selecting some
arbitrary node as the root node in the case of a graph) and explores as far as
possible along each branch before backtracking. It uses last in- first-out
strategy and hence it is implemented using a stack.
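To make the stack-based (LIFO) idea concrete, here is a minimal Python sketch of DFS over an adjacency-list graph. The dfs function and the sample graph dictionary are illustrative assumptions (the example figure below is not reproduced here); the sample graph is chosen so that the result matches the path in the worked example.

def dfs(graph, start, goal):
    # Depth-first search using an explicit stack (last in, first out).
    # Each stack entry is the whole path explored so far.
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()                 # take the most recently added path
        node = path[-1]
        if node == goal:
            return path
        if node not in visited:
            visited.add(node)
            # reversed() so that the first listed neighbour is explored first
            for neighbour in reversed(graph.get(node, [])):
                stack.append(path + [neighbour])
    return None

# Illustrative graph in which DFS finds the same path as the worked example.
graph = {'S': ['A', 'D'], 'A': ['B'], 'B': ['C'], 'C': ['G'], 'D': ['G']}
print(dfs(graph, 'S', 'G'))                # ['S', 'A', 'B', 'C', 'G']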

Example:
Question. Which solution would DFS find to move from node S to node G if run on the graph below?

Solution. The equivalent search tree for the above graph is as follows. As DFS
traverses the tree “deepest node first”, it would always pick the deeper branch

until it reaches the solution (or it runs out of nodes, and goes to the next
branch). The traversal is shown in blue arrows.

Path: S -> A -> B -> C -> G

Breadth First Search:


Breadth-first search (BFS) is an algorithm for traversing or searching tree or
graph data structures. It starts at the tree root (or some arbitrary node of a
graph, sometimes referred to as a ‘search key’), and explores all of the
neighbor nodes at the present depth prior to moving on to the nodes at the next
depth level. It is implemented using a queue.
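As a counterpart to the DFS sketch above, here is a minimal Python sketch of BFS with a FIFO queue. Again, the bfs function and the sample graph are illustrative assumptions, chosen so that the result matches the shallowest path reported in the example below.

from collections import deque

def bfs(graph, start, goal):
    # Breadth-first search using a FIFO queue; returns a shallowest path.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()             # take the oldest path first
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

graph = {'S': ['A', 'D'], 'A': ['B'], 'B': ['C'], 'C': ['G'], 'D': ['G']}
print(bfs(graph, 'S', 'G'))                # ['S', 'D', 'G']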

Solution. The equivalent search tree for the above graph is as follows. As BFS
traverses the tree “shallowest node first”, it would always pick the shallower
branch until it reaches the solution (or it runs out of nodes, and goes to the
next branch). The traversal is shown in blue arrows.

Path: S -> D -> G


Uniform Cost Search:

UCS is different from BFS and DFS because here the costs come into play. In
other words, traversing via different edges might not have the same cost. The
goal is to find a path where the cumulative sum of costs is the least.

Cost of a node is defined as:


cost(node) = cumulative cost of all nodes from root

cost(root) = 0
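The following is a minimal Python sketch of UCS with a priority queue ordered by cumulative cost. The weighted graph, with edges written as (neighbour, cost) pairs, is an illustrative reconstruction (the example figure is not reproduced here), chosen so that the answer matches the worked example that follows.

import heapq

def ucs(graph, start, goal):
    # Uniform-cost search: always expand the frontier path with the lowest
    # cumulative cost, using a priority queue (min-heap).
    frontier = [(0, [start])]
    best_cost = {start: 0}
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path
        for neighbour, step_cost in graph.get(node, []):
            new_cost = cost + step_cost
            # keep only the cheapest known way of reaching each node
            if new_cost < best_cost.get(neighbour, float('inf')):
                best_cost[neighbour] = new_cost
                heapq.heappush(frontier, (new_cost, path + [neighbour]))
    return None

# Illustrative weighted graph: edge lists hold (neighbour, edge cost) pairs.
graph = {'S': [('A', 1), ('D', 3)], 'A': [('B', 2)], 'B': [('G', 2)], 'D': [('G', 4)]}
print(ucs(graph, 'S', 'G'))                # (5, ['S', 'A', 'B', 'G'])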

Example:
Question. Which solution would UCS find to move from node S to node G if
run on the graph below?

Solution. The equivalent search tree for the above graph is as follows. The
cost of each node is the cumulative cost of reaching that node from the root.
Based on the UCS strategy, the path with the least cumulative cost is chosen.
Note that due to the many options in the fringe, the algorithm explores most of
them so long as their cost is low, and discards them when a lower-cost path is
found; these discarded traversals are not shown below. The actual traversal is
shown in blue.

Path: S -> A -> B -> G
Cost: 5

Informed Search Algorithms:

Here, the algorithms have information on the goal state, which helps in more
efficient searching. This information is obtained by something called
a heuristic.
In this section, we will discuss the following search algorithms.
1. Greedy Search
2. A* Tree Search
3. A* Graph Search

Greedy Search:

In greedy search, we expand the node closest to the goal node. The “closeness”
is estimated by a heuristic h(x).

Heuristic: A heuristic h is defined as-


h(x) = Estimate of distance of node x from the goal node.
Lower the value of h(x), closer is the node from the goal.

Strategy: Expand the node closest to the goal state, i.e. expand the node with a lower
h value.
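A minimal Python sketch of greedy search, assuming the heuristic is given as a dictionary h of estimates; the graph and the h values are illustrative, mirroring the example below.

import heapq

def greedy_best_first(graph, h, start, goal):
    # Greedy search: always expand the frontier node with the lowest h value.
    frontier = [(h[start], [start])]
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            heapq.heappush(frontier, (h[neighbour], path + [neighbour]))
    return None

# Illustrative graph and heuristic values mirroring the example below.
graph = {'S': ['A', 'D'], 'A': [], 'D': ['B', 'E'], 'B': [], 'E': ['G']}
h = {'S': 7, 'A': 9, 'D': 5, 'B': 4, 'E': 3, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'D', 'E', 'G']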

Example:
Question. Find the path from S to G using greedy search. The heuristic
values h of each node below the name of the node.

Solution. Starting from S, we can traverse to A(h=9) or D(h=5). We choose D, as it


has the lower heuristic cost. Now from D, we can move to B(h=4) or E(h=3).
We choose E with a lower heuristic cost. Finally, from E, we go to G(h=0). This
entire traversal is shown in the search tree below, in blue.

Path: S -> D -> E -> G

A* Tree Search:

 Here, h(x) is called the forward cost and is an estimate of the distance of
the current node from the goal node.
 And, g(x) is called the backward cost and is the cumulative cost of a node
from the root node.
 A* search is optimal only when for all nodes, the forward cost for a node
h(x) underestimates the actual cost h*(x) to reach the goal. This property
of A* heuristic is called admissibility.

Admissibility: A heuristic h is admissible if h(x) ≤ h*(x) for every node x, i.e. it never overestimates the actual cost to reach the goal. The total cost used for ordering is f(x) = g(x) + h(x).

Strategy: Choose the node with the lowest f(x) value.
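A minimal Python sketch of A* tree search, assuming an admissible heuristic. The edge costs and h values below are an illustrative reconstruction consistent with the g(x) and h(x) columns of the table in the example that follows.

import heapq

def a_star(graph, h, start, goal):
    # A* tree search: expand the frontier entry with the lowest f = g + h.
    frontier = [(h[start], 0, [start])]          # entries are (f, g, path)
    while frontier:
        _, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return g, path
        for neighbour, step_cost in graph.get(node, []):
            new_g = g + step_cost
            heapq.heappush(frontier, (new_g + h[neighbour], new_g, path + [neighbour]))
    return None

# Illustrative weighted graph and heuristic consistent with the table below.
graph = {'S': [('A', 3), ('D', 2)], 'A': [], 'D': [('B', 1), ('E', 4)],
         'B': [('C', 2), ('E', 1)], 'C': [('G', 4)], 'E': [('G', 3)]}
h = {'S': 7, 'A': 9, 'D': 5, 'B': 4, 'C': 2, 'E': 3, 'G': 0}
print(a_star(graph, h, 'S', 'G'))            # (7, ['S', 'D', 'B', 'E', 'G'])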


Example:
Find the path to reach from S to G using A* search.

Solution. Starting from S, the algorithm computes g(x) + h(x) for all nodes in
the fringe at each step, choosing the node with the lowest sum. The entire
work is shown in the table below.

Note that in the fourth set of iterations, we get two paths with equal summed
cost f(x), so we expand them both in the next set. The path with a lower cost
on further expansion is the chosen path.

Path h(x) g(x) f(x)

S 7 0 7

S -> A 9 3 12

S -> D 5 2 7

S -> D -> B 4 2+1=3 7

S -> D -> E 3 2+4=6 9

S -> D -> B -> C 2 3+2=5 7

S -> D -> B -> E 3 3+1=4 7

S -> D -> B -> C -> G 0 5+4=9 9

S -> D -> B -> E -> G 0 4+3=7 7

Path: S -> D -> B -> E -> G


Cost: 7

A* Graph Search:
 A* tree search works well, except that it takes time re-exploring branches it has already explored. In other words, if the same node is expanded twice in different branches of the search tree, A* search might explore both of those branches, thus wasting time.
 A* Graph Search, or simply Graph Search, removes this limitation by
adding this rule: do not expand the same node more than once.
 Heuristic. Graph search is optimal only when the forward cost between
two successive nodes A and B, given by h(A) – h (B), is less than or equal
to the backward cost between those two nodes g(A -> B). This property of
the graph search heuristic is called consistency.

Consistency: h(A) – h(B) ≤ g(A → B), i.e. h(A) ≤ g(A → B) + h(B) for every pair of successive nodes A and B.

Example:
Question. Use graph searches to find paths from S to G in the
following graph.

Solution.
We solve this question in pretty much the same way as the last question, but in this case we keep track of explored nodes so that we do not re-explore them.

Path: S -> D -> B -> E -> G


Cost: 7

V. UNINFORMED SEARCH STRATEGIES

Uninformed Search Strategies

Uninformed search is a class of general-purpose search algorithms which


operates in brute force-way. Uninformed search algorithms do not have
additional information about state or search space other than how to traverse the
tree, so it is also called blind search.

Following are the various types of uninformed search strategies:

1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search

1. Breadth-first Search:

o Breadth-first search is the most common search strategy for traversing a


tree or graph. This algorithm searches breadthwise in a tree or graph, so it
is called breadth-first search.
o BFS algorithm starts searching from the root node of the tree and expands
all successor node at the current level before moving to nodes of next
level.
o The breadth-first search algorithm is an example of a general-graph
search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.

Advantages:

o BFS will provide a solution if any solution exists.


o If there is more than one solution for a given problem, then BFS will provide the minimal solution, which requires the least number of steps.

Disadvantages:

o It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.
o BFS needs lots of time if the solution is far away from the root node.

Example:

In the below tree structure, we have shown the traversal of the tree using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it will follow the path shown by the dotted arrow, and the traversed path will be:

1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K

Time Complexity: The time complexity of the BFS algorithm is given by the number of nodes traversed until the shallowest goal node, which is O(b^d), where d is the depth of the shallowest solution and b is the branching factor (the number of successors of every node).

2. Depth-first Search

o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called the depth-first search because it starts from the root node and
follows each path to its greatest depth node before moving to the next
path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.

Advantage:

o DFS requires very little memory as it only needs to store a stack of the nodes on the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).

Disadvantage:

o There is the possibility that many states keep re-occurring, and there is no
guarantee of finding the solution.
o The DFS algorithm goes for deep-down searching and may sometimes go into an infinite loop.

Example:

In the below search tree, we have shown the flow of depth-first search, and it will
follow the order as:

Root node--->Left node--------> right node.

It will start searching from root node S and traverse A, then B, then D and E; after traversing E, it will backtrack the tree as E has no other successor and the goal node is still not found. After backtracking it will traverse node C and then G, and here it will terminate as it has found the goal node.

3. Depth-Limited Search Algorithm:

A depth-limited search algorithm is similar to depth-first search with a


predetermined limit. Depth-limited search can solve the drawback of the infinite
path in the Depth-first search. In this algorithm, the node at the depth limit will
treat as it has no successor nodes further.

Depth-limited search can be terminated with two Conditions of failure:

o Standard failure value: It indicates that problem does not have any
solution.
o Cutoff failure value: It defines no solution for the problem within a given
depth limit.

Advantages:

Depth-limited search is Memory efficient.

Disadvantages:

o Depth-limited search also has a disadvantage of incompleteness.


o It may not be optimal if the problem has more than one solution.

Example:

4. Uniform-cost Search Algorithm:

Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of the uniform-cost search is to find a path to the goal node which has the lowest cumulative cost. Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree where the optimal cost is in demand. A uniform-cost search algorithm is implemented with a priority queue. It gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.

Advantages:

o Uniform cost search is optimal because at every state the path with the
least cost is chosen.

Disadvantages:

o It does not care about the number of steps involved in searching and is only concerned with path cost, due to which this algorithm may get stuck in an infinite loop.

Example:

5. Iterative deepening depth-first Search:

The iterative deepening algorithm is a combination of DFS and BFS algorithms.


This search algorithm finds out the best depth limit and does it by gradually
increasing the limit until a goal is found.

This algorithm performs depth-first search up to a certain "depth limit", and it


keeps increasing the depth limit after each iteration until the goal node is found.

This Search algorithm combines the benefits of Breadth-first search's fast search
and depth-first search's memory efficiency.

The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.

Advantages:

o It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.

Disadvantages:

o The main drawback of IDDFS is that it repeats all the work of the
previous phase.

Example:

The tree structure below illustrates the iterative deepening depth-first search. The IDDFS algorithm performs successive iterations, with an increasing depth limit, until it finds the goal node; a minimal sketch of the procedure follows.
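This is a minimal Python sketch of IDDFS, assuming the same illustrative adjacency-list graph used earlier; depth_limited and iddfs are assumed helper names, not part of the original notes.

def depth_limited(graph, node, goal, limit, path):
    # Depth-first search that refuses to expand below the given depth limit.
    if node == goal:
        return path
    if limit == 0:
        return None
    for neighbour in graph.get(node, []):
        result = depth_limited(graph, neighbour, goal, limit - 1, path + [neighbour])
        if result is not None:
            return result
    return None

def iddfs(graph, start, goal, max_depth=20):
    # Run depth-limited search with limits 0, 1, 2, ... until the goal is found.
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

graph = {'S': ['A', 'D'], 'A': ['B'], 'B': ['C'], 'C': ['G'], 'D': ['G']}
print(iddfs(graph, 'S', 'G'))              # ['S', 'D', 'G']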

6. Bidirectional Search Algorithm:

The bidirectional search algorithm runs two simultaneous searches, one from the initial state called forward-search and the other from the goal node called backward-search, to find the goal node. Bidirectional search replaces one single search graph with two small subgraphs, in which one starts the search from the initial vertex and the other starts from the goal vertex. The search stops when these two graphs intersect each other.

Advantages:

o Bidirectional search is fast.


o Bidirectional search requires less memory

Disadvantages:

o Implementation of the bidirectional search tree is difficult.

o In bidirectional search, one should know the goal state in advance.

Example:

In the below search tree, bidirectional search algorithm is applied. This algorithm
divides one graph/tree into two sub-graphs. It starts traversing from node 1 in
the forward direction and starts from goal node 16 in the backward direction.

The algorithm terminates at node 9 where two searches meet.

VI. HEURISTIC SEARCH STRATEGIES


 Heuristic search is defined as a search procedure that attempts to optimize a problem by iteratively improving the solution based on a given heuristic function or cost measure.
 This technique does not always guarantee an optimal or best solution, but it may instead find a good or acceptable solution within a reasonable amount of time and memory space. This is a kind of shortcut, as we often trade one of optimality, completeness, accuracy, or precision for speed.
 It may be a function employed in informed search that finds the most promising path. It takes the present state of the agent as its input and produces an estimate of how close the agent is to the goal.
 A heuristic (or heuristic function) guides search algorithms. At each branching step, it evaluates the available information and decides which branch to follow. It does so by ranking the alternatives. A heuristic is any device that is often effective but is not guaranteed to work in every case.
 We need heuristics in order to produce, in a reasonable amount of time, a solution that is good enough for the problem in question. It does not have to be the best; an approximate solution will do, since this is sufficiently fast. Most problems are exponential.
 Heuristic search lets us reduce this to a roughly polynomial number of possibilities. We use it in AI because we can apply it in situations where we cannot find known algorithms.

Techniques in Heuristic Search

1. Direct Heuristic Search (Informed Search)

Informed search algorithms have information about the goal state, which helps in more efficient searching. This information is gathered as a function that measures how close a state is to the goal state.

Their significant advantage is that their efficiency is high and they are capable of finding solutions in a shorter span of time than uninformed search.

They make use of knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This data helps agents explore less of the search space and find the goal node more efficiently. Informed search is also comparatively less expensive than an uninformed search. Its models include:

a. A* Search

A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n). It combines the features of UCS and greedy best-first search, by which it solves the problem efficiently.

A* search finds the shortest path through the search space using the heuristic function. It expands fewer search trees and gives an optimal result faster.

The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n). It is formulated for weighted graphs, which means it can find the best path involving the smallest cost in terms of distance and time.
This makes the A* algorithm in AI an informed search algorithm for best-first search.
b. Greedy Best First Search

The greedy best-first search algorithm always selects the path which appears best at that moment. In the best-first search algorithm, we expand the node which is closest to the goal node, and the closeness is estimated by a heuristic function.

This sort of search always picks the path which appears best at that point. It is a blend of BFS and DFS: it uses a heuristic function while searching, and best-first search lets us take the advantages of both algorithms.

2. Weak Heuristic Search (Uninformed Search)

Uninformed search algorithms have no additional information about the goal node other than the one given in the problem definition, so this is also called blind search.

The plans to reach the goal state from the start state differ only by the order and length of actions.

Uninformed search is a class of general-purpose search algorithms which operates in a brute-force way. Because no problem-specific knowledge is used, it generally has to explore more of the search space than an informed search. Instances of uninformed search are:

a. Breadth-First Search

BFS is an approach used for searching graph data or traversing tree or graph structures. The algorithm efficiently visits and marks all the key nodes in a graph in an exact breadthwise fashion.

The algorithm picks a single node (the beginning or source point) in a graph and then visits all the nodes adjacent to the selected node. Remember, BFS accesses these nodes one by one.

When the algorithm has visited and marked the starting node, it moves towards the nearest unvisited nodes and evaluates them. Once visited, all nodes are marked. These iterations continue until all the nodes of the graph have been successfully visited and marked.
Some of the cons of Breadth-First Search include:

 It eats up a lot of memory space, as every level of nodes is saved for generating the next one.
 Its complexity depends upon the number of nodes. It can check duplicate nodes.
b. Uniform Cost Search

Basically, it orders nodes by increasing cost of the path to a node, and it always expands the least-cost node.

Uniform-cost search expands nodes according to their path costs from the root node. It is often used to solve any graph/tree where the optimal cost is in demand.

It is indistinguishable from breadth-first search if every transition has the same cost. It explores paths in increasing order of cost.
c. Depth First Search

It relies on the idea of LIFO, that is, Last In First Out. It is commonly implemented with recursion using a LIFO stack data structure. It generates the same set of nodes as the breadth-first procedure, only in a different order.
Since the path from the root to a leaf node is stored in each iteration, the space requirement is linear in the stored nodes. With branching factor b and depth m, the additional space is b*m.

Drawbacks of Depth First Search

 The algorithm may not terminate and may go on indefinitely along one path. Hence, one response to this issue is to choose a cut-off depth.
 If the ideal cut-off is d, and the chosen cut-off is less than d, then this algorithm may fail.
 If, on the other hand, d is less than the fixed cut-off, then execution time increases.
 Its complexity depends upon the number of paths. It cannot check duplicate nodes.
d. Iterative Deepening Depth First Search

Iterative Deepening Depth First Search (IDDFS) is a strategy in which rounds of DFS are run repeatedly with increasing depth limits until we locate the target. IDDFS is optimal like BFS, yet uses considerably less memory.
In each iteration, it visits the nodes in the search tree in the same order as depth-first search, but the cumulative order in which nodes are first visited is effectively breadth-first.

e. Bidirectional Search

This, as the name suggests, runs in two directions. It works with two searches that run at the same time, one from the source towards the goal and the other one from the goal towards the source in the backward direction.
The two searches should share the same data structure. It relies on a directed graph to find the shortest route between the source (initial node) and the goal node.

The two searches start from their respective places, and the algorithm stops when the two searches meet at a node. It is a faster method and improves the amount of time required for traversing the graph.

This strategy is effective when the starting node and the target node are unique and defined, and the branching factor is the same for both searches.

Hill Climbing in AI

Hill Climbing is a kind of heuristic search for mathematical optimization problems in the field of Artificial Intelligence. Given a set of inputs and a reasonably good heuristic function, it tries to find a sufficiently good solution to the problem. This solution may not be the global optimum.

In the above definition, mathematical optimization problems imply that hill climbing addresses problems where we need to maximize or minimize a given real-valued function by picking values from the given inputs.

For example, in the travelling salesman problem we need to minimize the distance travelled by the salesperson.

'Heuristic search' means that this search algorithm may not find the optimal solution to the problem; however, it will give a reasonably good solution in a reasonable time.
A heuristic function is a function that ranks all the potential alternatives at any branching step of the search algorithm, based on the available information. It makes the algorithm pick the best route out of the possible routes.

Features of Hill Climbing

 Generate and Test variation: Hill climbing is a variation of the Generate and Test strategy. The Generate and Test technique produces feedback which helps to decide which direction to move in the search space.
 Use of Greedy Approach: The hill-climbing search moves in the direction which improves the cost.
 No backtracking: It does not backtrack in the search space, as it does not remember previous states.

Types of Hill Climbing in AI

a. Simple Hill Climbing

Simple hill climbing is the simplest way to implement a hill-climbing algorithm. It evaluates only one neighbour node state at a time, chooses the first one which improves the current cost, and sets it as the current state.

It checks only one successor state and, if it finds one better than the current state, it moves there; otherwise it stays in the same state.
Its features include:

 Less time consuming
 A less optimal solution, and the solution is not guaranteed
b. Steepest Ascent Hill Climbing

The steepest-ascent algorithm is a variation of the simple hill-climbing algorithm. It first examines all the neighbouring nodes and then selects the node closest to the goal state as the next node.
This algorithm examines all the neighbouring nodes of the current state and chooses the one neighbour node which is nearest to the goal state.

This algorithm consumes more time as it searches through multiple neighbours.

c. Stochastic Hill Climbing

Stochastic hill climbing does not examine all of its neighbours before moving. It makes use of randomness as a part of the search process. It is also a local search algorithm, meaning that it modifies one solution and searches the relatively local area of the search space until a local optimum is found.

This suggests that it is appropriate for unimodal optimization problems or for use after the application of a global optimization algorithm.

This algorithm chooses one neighbour node at random and decides whether to pick it as the current state or to examine another state.

VII. LOCAL SEARCH AND OPTIMIZATION PROBLEMS


A local search algorithm in artificial intelligence works by starting with an
initial solution and then making minor adjustments to it in the hopes of
discovering a better one. Every time the algorithm iterates, the current solution
is assessed, and a small modification to the current solution creates a new
solution. The current solution is then compared to the new one, and if the new
one is superior, it replaces the old one. This process keeps going until a
satisfactory answer is discovered or a predetermined stopping criterion is
satisfied.

Hill climbing, simulated annealing, tabu search, and genetic algorithms are
a few examples of different kinds of local search algorithms. Each of these
algorithms operates a little bit differently, but they all follow the same
fundamental procedure of iteratively creating new solutions and comparing
them to the existing solution to determine whether they are superior.

The local search algorithm in artificial intelligence is a crucial tool in the field
of artificial intelligence and is frequently employed to address a range of
optimization issues.

Applications for local search algorithms include scheduling, routing, and


resource allocation. They are particularly helpful for issues where the search
space is very large and can be used to solve both discrete and continuous
optimization problems.

Local Search

The nodes are expanded systematically by informed and uninformed searches in


different ways:

 storing various routes in memory and


 choosing the most appropriate route,

Hence, a solution state is needed to get to the goal node. Beyond


these "classical search algorithms," however, there are some "local search
algorithms" that only consider the solution state required to reach the target
node and disregard path cost.

In contrast to multiple paths, a local search algorithm completes its task by


traversing a single current node and generally following that node's neighbors.

Working on a Local Search Algorithm

Local search algorithms are a type of optimization algorithm that iteratively


improves the solution to a problem by making small, local changes to it. Here
are the general steps of a local search algorithm:

 Initialization:
The algorithm starts with an initial solution to the problem. This solution
can be generated randomly or using a heuristic.
 Evaluation:
The quality of the initial solution is evaluated using an objective function.
The objective function measures how good the solution is, based on the
problem constraints and requirements.
 Neighborhood search:
The algorithm generates neighbouring solutions by making small modifications to the current solution. These modifications can be random or guided by heuristics.
 Selection:
The neighboring solutions are evaluated using the objective function, and
the best solution is selected as the new current solution.
 Termination:
The algorithm terminates when a stopping criterion is met. This criterion
can be a maximum number of iterations, a threshold value for the
objective function, or a time limit.
 Solution:
The final solution is the best solution found during the search process.

Local search algorithms are frequently employed in situations where it is


computationally impractical or impossible to find an exact solution.
The traveling salesman problem, the knapsack problem, and the graph coloring problem are a few examples of such problems.

Local Search Algorithms

A class of optimization algorithms known as local search algorithms in artificial


intelligence iteratively improves a given solution by looking through its nearby
solutions.

The main principle of local search algorithms is to begin with an initial solution and then move to a better solution in its neighborhood, until no better solution is found. There are several types of local search algorithms, some of which are listed below:

Hill Climbing

Hill climbing is a local search algorithm in artificial intelligence applied to


optimization and artificial intelligence issues. It is a heuristic search algorithm
that starts with an initial solution and iteratively enhances it by making small
adjustments to it, one at a time, and choosing the best adjustment that enhances
the solution the most.

The basic steps of the hill climbing algorithm are as follows:

1. Start with an initial solution.


2. Evaluate the objective function to determine the quality of the solution.
3. Generate a set of candidate solutions by making small modifications to
the current solution.
4. Evaluate the objective function for each candidate solution.
5. Select the candidate solution that improves the objective function the
most.
6. Repeat steps 3-5 until no further improvement can be made.
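The steps above can be sketched in Python roughly as follows. This is a minimal illustration, assuming a maximization problem; the names hill_climbing, neighbours, and objective are illustrative, not a fixed API.

def hill_climbing(initial, neighbours, objective, max_iterations=1000):
    # Steepest-ascent style hill climbing: keep taking the best neighbour
    # until none of them improves the objective (steps 3-6 above).
    current = initial
    for _ in range(max_iterations):
        candidates = neighbours(current)
        if not candidates:
            break
        best = max(candidates, key=objective)
        if objective(best) <= objective(current):
            break                                  # local optimum reached
        current = best
    return current

# Toy usage: maximise f(x) = -(x - 3)**2 over the integers.
f = lambda x: -(x - 3) ** 2
print(hill_climbing(0, lambda x: [x - 1, x + 1], f))   # 3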

The simplicity and effectiveness of hill climbing, which does not require the
creation and upkeep of a search tree, is one of its main benefits. Its main
drawback, however, is that it might get stuck in local optima, where it is
impossible to improve without first making a non-improving move. Numerous
modifications to the fundamental hill climbing algorithm, including simulated
annealing and tabu search, have been suggested to address this issue.

Local Beam Search

A heuristic search algorithm called local beam search is applied to


optimization and artificial intelligence issues. It is a modification of the standard
hill climbing algorithm in which the current states are the starting set (or
"beam") of k solutions rather than a single solution.

The basic steps of the local beam search algorithm are as follows:

1. Start with k randomly generated solutions.


2. Evaluate the objective function for each solution.
3. Generate a set of candidate solutions by making small modifications to
the current states.
4. Evaluate the objective function for each candidate solution.
5. Select the k best candidate solutions to become the new current states.
6. Repeat steps 3-5 until a stopping criterion is met.
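A minimal sketch of the beam idea, under the same assumptions as the hill-climbing sketch above (maximization, illustrative function names); keeping the current states among the candidates retains any local optimum already found.

import random

def local_beam_search(k, random_state, neighbours, objective, iterations=100):
    # Keep a beam of k states; at each step gather all their neighbours
    # (plus the current states) and retain the k best (steps 1-6 above).
    beam = [random_state() for _ in range(k)]
    for _ in range(iterations):
        candidates = list(beam)
        for state in beam:
            candidates.extend(neighbours(state))
        candidates.sort(key=objective, reverse=True)   # maximising
        beam = candidates[:k]
    return max(beam, key=objective)

# Toy usage: maximise f(x) = -(x - 7)**2 with integer states.
f = lambda x: -(x - 7) ** 2
print(local_beam_search(3, lambda: random.randint(-50, 50),
                        lambda x: [x - 1, x + 1], f))  # 7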

The ability to explore multiple search paths simultaneously gives local beam
search an edge over traditional hill climbing, which can increase the likelihood
of discovering the global optimum. Even so, it can still find itself in local
optima, especially if the search space is big or complicated.

Simulated Annealing

Simulated Annealing is a heuristic search algorithm applied to optimization


and artificial intelligence issues. By allowing the algorithm to occasionally
accept moves that do not improve, this variation of the hill climbing algorithm
can avoid the issue of getting stuck in local optima.

The basic steps of the simulated annealing algorithm are as follows:

1. Start with an initial solution.


2. Set the initial temperature to a high value.
3. Repeat the following steps until the stopping criterion is met:
o Generate a new solution by making a small modification to the
current solution.
o Evaluate the objective function of the new solution.
o If the new solution improves the objective function, accept it as the
new current solution.
o If the new solution does not improve the objective function, accept
it with a probability that depends on the difference between the
objective function values of the current and new solutions and the
current temperature.
o Decrease the temperature according to a cooling schedule.
4. Return the current solution as the final solution.
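A minimal sketch of simulated annealing, assuming a minimization problem, a geometric cooling schedule, and the common acceptance probability exp(-delta / temperature); all names and parameter values are illustrative.

import math
import random

def simulated_annealing(initial, neighbour, objective,
                        temperature=1.0, cooling=0.995, min_temperature=1e-3):
    # Minimise objective(); worse moves are accepted with probability
    # exp(-delta / temperature), and the temperature is lowered each step.
    current = initial
    while temperature > min_temperature:
        candidate = neighbour(current)
        delta = objective(candidate) - objective(current)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
        temperature *= cooling                    # cooling schedule
    return current

# Toy usage: minimise f(x) = (x - 5)**2 starting from x = 0.
f = lambda x: (x - 5) ** 2
result = simulated_annealing(0.0, lambda x: x + random.uniform(-1, 1), f)
print(round(result, 2))                           # close to 5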

The main principle of the simulated annealing algorithm is to control the level
of randomness in the search process by altering the temperature parameter. High
temperatures enable the algorithm to explore new regions of the search space by
increasing its propensity to accept non-improving moves. The algorithm
becomes more selective and concentrates on improving the solution as the
temperature drops.

Travelling Salesman Problem

The Traveling Salesman Problem (TSP) is a well-known example of a


combinatorial optimization problem in which the goal is to determine the shortest route that starts at a given city, visits each city in a given set exactly once, and returns to the starting city. No known algorithm can solve it exactly in polynomial time because it is an NP-hard problem.

Due to their capacity to efficiently search sizable solution spaces, local search
algorithms in artificial intelligence are frequently used to solve the TSP. Local
search algorithms for the TSP work by starting with an initial solution and
incrementally improving it by making small changes to it one at a time until no
more advancements are possible.

The 2-opt local search algorithm, which connects the remaining nodes in a way
that minimizes the total distance, is a popular local search algorithm for the
TSP. It involves removing two edges from the current solution and replacing
them with two different edges. The algorithm keeps going until there is no more
room for improvement.
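A minimal sketch of the 2-opt idea, assuming cities are given as (x, y) coordinates and tour length is Euclidean; the 5-city instance is made up purely for illustration.

import math

def tour_length(tour, coords):
    # Total length of the closed tour (returns to the starting city).
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, coords):
    # Repeatedly reverse a segment of the tour (a 2-opt move, i.e. removing
    # two edges and reconnecting the tour) while doing so shortens it.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, coords) < tour_length(tour, coords):
                    tour, improved = candidate, True
    return tour

# Illustrative 5-city instance given by (x, y) coordinates; city 0 is the start.
coords = [(0, 0), (0, 2), (2, 2), (2, 0), (1, 3)]
print(two_opt(list(range(len(coords))), coords))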

The Lin-Kernighan algorithm, which entails a series of 2-opt moves guided by a


collection of heuristics based on edge exchange and node merging, is another
well-liked local search algorithm for the TSP. Although this algorithm is more
complicated than 2-opt, it can produce superior outcomes.

Both 2-opt and the Lin-Kernighan algorithm are examples of local search
algorithms that operate on a single solution at a time. However, to improve the
chances of finding the global optimum, various modifications to these
algorithms have been proposed, such as tabu search and simulated annealing.
These algorithms introduce additional mechanisms to escape from local optima
and explore new parts of the search space.

VIII. ADVERSARIAL SEARCH:

Adversarial Search

Adversarial search is a search, where we examine the problem which arises


when we try to plan ahead of the world and other agents are planning against us.

o The environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the other agents and plays against them. Each agent needs to consider the actions of the other agents and the effect of those actions on its own performance.
o So, Searches in which two or more players with conflicting goals are
trying to explore the same search space for the solution, are called
adversarial searches, often known as Games.
o Games are modeled as a Search problem and heuristic evaluation
function, and these are the two main factors which help to model and
solve games in AI.

Types of Games in AI:

o Perfect information: A game with perfect information is one in which agents can look at the complete board. Agents have all the information about the game, and they can see each other's moves as well. Examples are Chess, Checkers, Go, etc.
o Imperfect information: If in a game agents do not have all the information about the game and are not aware of what's going on, such games are called games with imperfect information, such as blind tic-tac-toe, Battleship, Bridge, etc.
o Deterministic games: Deterministic games are those games which
follow a strict pattern and set of rules for the games, and there is no
randomness associated with them. Examples are chess, Checkers, Go, tic-
tac-toe, etc.
o Non-deterministic games: Non-deterministic games are those which have various unpredictable events and a factor of chance or luck. This factor of chance or luck is introduced by either dice or cards. These are random, and each action response is not fixed. Such games are also called stochastic games.
Example: Backgammon, Monopoly, Poker, etc.

Zero-Sum Game

o Zero-sum games are adversarial search which involves pure competition.


o In a zero-sum game, each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of another agent.
o One player of the game tries to maximize one single value, while the other player tries to minimize it.
o Each move by one player in the game is called a ply.
o Chess and tic-tac-toe are examples of a Zero-sum game.

Zero-sum game: Embedded thinking

The zero-sum game involves embedded thinking, in which one agent or player is trying to figure out:

o What to do.
o How to decide the move
o Needs to think about his opponent as well
o The opponent also thinks what to do

Each of the players is trying to find out the response of his opponent to their
actions. This requires embedded thinking or backward reasoning to solve the
game problems in AI.

Formalization of the problem:


o Initial state: It specifies how the game is set up at the start.
o Player(s): It specifies which player has the move in a state.
o Action(s): It returns the set of legal moves in state space.
o Result(s, a): It is the transition model, which specifies the result of
moves in the state space.
o Terminal-Test(s): The terminal test is true if the game is over, else it is false. The states where the game ends are called terminal states.
o Utility(s, p): A utility function gives the final numeric value for a game
that ends in terminal states s for player p. It is also called payoff function.
For Chess, the outcomes are a win, loss, or draw and its payoff values are
+1, 0, ½. And for tic-tac-toe, utility values are +1, -1, and 0.

Game tree:

A game tree is a tree where nodes of the tree are the game states and Edges of
the tree are the moves by players. Game tree involves initial state, actions
function, and result Function.

Example: Tic-Tac-Toe game tree:

The following figure is showing part of the game-tree for tic-tac-toe game.
Following are some key points of the game:

o There are two players MAX and MIN.


o Players have an alternate turn and start with MAX.
o MAX maximizes the result of the game tree
o MIN minimizes the result.

Example Explanation:

o From the initial state, MAX has 9 possible moves as he starts first. MAX places x and MIN places o, and both players play alternately until we reach a leaf node where one player has three in a row or all squares are filled.
o Both players will compute, for each node, the minimax value: the best achievable utility against an optimal adversary.
o Suppose both the players are well aware of the tic-tac-toe and playing the
best play. Each player is doing his best to prevent another one from
winning. MIN is acting against Max in the game.
o So in the game tree, we have a layer of MAX, a layer of MIN, and each layer is called a ply. MAX places x, then MIN puts o to prevent MAX from winning, and this game continues until a terminal node is reached.
o In this either MIN wins, MAX wins, or it's a draw. This game-tree is the
whole search space of possibilities that MIN and MAX are playing tic-
tac-toe and taking turns alternately.

Hence adversarial Search for the minimax procedure works as follows:

o It aims to find the optimal strategy for MAX to win the game.
o It follows the approach of Depth-first search.
o In the game tree, optimal leaf node could appear at any depth of the tree.
o Propagate the minimax values up the tree once the terminal nodes are discovered.
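A minimal sketch of the minimax procedure, assuming a hand-built game tree in which leaves hold the utility for MAX; the tree and its values are illustrative, not taken from the tic-tac-toe figure.

def minimax(node, maximizing):
    # Returns the minimax value of a game-tree node. A node is either a
    # terminal utility (an int) or a list of child nodes.
    if isinstance(node, int):
        return node                                  # terminal test
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Illustrative two-ply game tree: MAX chooses at the root, MIN at the next layer.
game_tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(game_tree, True))                      # 3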

IX. CONSTRAINT SATISFACTION PROBLEMS (CSP):
Finding a solution that meets a set of constraints is the goal of constraint
satisfaction problems (CSPs), a type of AI issue. Finding values for a group of
variables that fulfill a set of restrictions or rules is the aim of constraint
satisfaction problems. For tasks including resource allocation, planning,
scheduling, and decision-making, CSPs are frequently employed in AI.

There are mainly three basic components in the constraint satisfaction


problem:
Variables: The things that need to be determined are variables. Variables in a
CSP are the objects that must have values assigned to them in order to satisfy a
particular set of constraints. Boolean, integer, and categorical variables are just
a few examples of the various types of variables Variables, for instance, could
stand in for the many puzzle cells that need to be filled with numbers in a
sudoku puzzle.
Domains: The range of potential values that a variable can have is represented
by domains. Depending on the issue, a domain may be finite or limitless. For
instance, in Sudoku, the set of numbers from 1 to 9 can serve as the domain of a
variable representing a problem cell.
Constraints: The guidelines that control how variables relate to one another are
known as constraints. Constraints in a CSP define the ranges of possible values
for variables. Unary constraints, binary constraints, and higher-order constraints
are only a few examples of the various sorts of constraints. For instance, in a
sudoku problem, the restrictions might be that each row, column, and 3×3 box
can only have one instance of each number from 1 to 9.

Constraint Satisfaction Problems (CSP) representation:
 The finite set of variables V1, V2, V3, ..., Vn.
 A non-empty domain for every single variable: D1, D2, D3, ..., Dn.
 The finite set of constraints C1, C2, ..., Cm,
 where each constraint Ci restricts the possible values for variables,
 e.g., V1 ≠ V2.
 Each constraint Ci is a pair <scope, relation>.
 Example: <(V1, V2), V1 not equal to V2>
 Scope = set of variables that participate in the constraint.
 Relation = list of valid variable-value combinations.
 The relation might be an explicit list of permitted combinations, or an abstract relation that allows membership testing and listing.
Constraint Satisfaction Problems (CSP) algorithms:
 The backtracking algorithm is a depth-first search algorithm that
methodically investigates the search space of potential solutions up until a
solution is discovered that satisfies all the restrictions. The method begins
by choosing a variable and giving it a value before repeatedly attempting to
give values to the other variables. The method returns to the prior variable
and tries a different value if at any time a variable cannot be given a value
that fulfills the requirements. Once all assignments have been tried or a
solution that satisfies all constraints has been discovered, the algorithm
ends.
 The forward-checking algorithm is a variation of the backtracking
algorithm that condenses the search space using a type of local consistency.
For each unassigned variable, the method keeps a list of remaining values
and applies local constraints to eliminate inconsistent values from these
sets. The algorithm examines a variable’s neighbors after it is given a value
to see whether any of its remaining values become inconsistent and

removes them from the sets if they do. The algorithm goes backward if, after
forward checking, a variable has no more values.
 Algorithms for propagating constraints are a class that uses local
consistency and inference to condense the search space. These algorithms
operate by propagating restrictions between variables and removing
inconsistent values from the variable domains using the information
obtained.
Implementation code for Constraint Satisfaction Problems (CSP):

class CSP:

    def __init__(self, variables, Domains, constraints):
        self.variables = variables
        self.domains = Domains
        self.constraints = constraints
        self.solution = None

    def solve(self):
        assignment = {}
        self.solution = self.backtrack(assignment)
        return self.solution

    def backtrack(self, assignment):
        # A complete assignment that never violated a constraint is a solution.
        if len(assignment) == len(self.variables):
            return assignment
        var = self.select_unassigned_variable(assignment)
        for value in self.order_domain_values(var, assignment):
            if self.is_consistent(var, value, assignment):
                assignment[var] = value
                result = self.backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]      # undo the assignment and try the next value
        return None

    def select_unassigned_variable(self, assignment):
        # Minimum-remaining-values heuristic: pick the unassigned variable
        # with the smallest domain.
        unassigned_vars = [var for var in self.variables if var not in assignment]
        return min(unassigned_vars, key=lambda var: len(self.domains[var]))

    def order_domain_values(self, var, assignment):
        return self.domains[var]

    def is_consistent(self, var, value, assignment):
        # A value is consistent if no already-assigned constrained variable
        # holds the same value (an all-different style constraint).
        for constraint_var in self.constraints[var]:
            if constraint_var in assignment and assignment[constraint_var] == value:
                return False
        return True


puzzle = [[5, 3, 0, 0, 7, 0, 0, 0, 0],
          [6, 0, 0, 1, 9, 5, 0, 0, 0],
          [0, 9, 8, 0, 0, 0, 0, 6, 0],
          [8, 0, 0, 0, 6, 0, 0, 0, 3],
          [4, 0, 0, 8, 0, 3, 0, 0, 1],
          [7, 0, 0, 0, 2, 0, 0, 0, 6],
          [0, 6, 0, 0, 0, 0, 2, 8, 0],
          [0, 0, 0, 4, 1, 9, 0, 0, 5],
          [0, 0, 0, 0, 8, 0, 0, 0, 0]]

def print_sudoku(puzzle):
    for i in range(9):
        if i % 3 == 0 and i != 0:
            print("----------------------")
        for j in range(9):
            if j % 3 == 0 and j != 0:
                print(" | ", end="")
            print(puzzle[i][j], end=" ")
        print()

print_sudoku(puzzle)
Output:
5 3 0 | 0 7 0 | 0 0 0
6 0 0 | 1 9 5 | 0 0 0
0 9 8 | 0 0 0 | 0 6 0
----------------------
8 0 0 | 0 6 0 | 0 0 3
4 0 0 | 8 0 3 | 0 0 1
7 0 0 | 0 2 0 | 0 0 6
----------------------
0 6 0 | 0 0 0 | 2 8 0
0 0 0 | 4 1 9 | 0 0 5
0 0 0 | 0 8 0 | 0 0 0
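As a follow-up, the CSP class defined above could, for illustration, be wired to this Sudoku grid roughly as follows. This is a hypothetical usage sketch, not part of the original listing: the variables, Domains, constraints, and peers names are assumptions, and because the class does plain backtracking without forward checking, solving may take a noticeable amount of time.

# Hypothetical wiring of the CSP class above to the Sudoku grid: every cell
# (row, col) is a variable, its domain is the given digit or 1..9, and its
# constraints are the other cells in the same row, column and 3x3 box.
variables = [(r, c) for r in range(9) for c in range(9)]
Domains = {(r, c): ([puzzle[r][c]] if puzzle[r][c] != 0 else list(range(1, 10)))
           for r in range(9) for c in range(9)}

def peers(r, c):
    row = [(r, j) for j in range(9) if j != c]
    col = [(i, c) for i in range(9) if i != r]
    box = [(i, j) for i in range(3 * (r // 3), 3 * (r // 3) + 3)
                  for j in range(3 * (c // 3), 3 * (c // 3) + 3) if (i, j) != (r, c)]
    return list(set(row + col + box))

constraints = {(r, c): peers(r, c) for r in range(9) for c in range(9)}

csp = CSP(variables, Domains, constraints)
solution = csp.solve()
if solution is not None:
    print_sudoku([[solution[(r, c)] for c in range(9)] for r in range(9)])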
