Unit 2 Part2

Constraint Satisfaction Problems (CSP) in artificial intelligence involve finding solutions to problems defined by variables, domains, and constraints. Techniques such as backtracking and constraint propagation are used to solve CSPs, while adversarial search applies to multi-agent environments in game playing. Game playing AI utilizes algorithms like Minimax and Alpha-Beta pruning to make optimal decisions in competitive scenarios.


Constraint Satisfaction Problems:

CSP is a fundamental topic in artificial intelligence (AI) that deals with solving problems by
identifying constraints and finding solutions that satisfy those constraints.

A Constraint Satisfaction Problem in artificial intelligence involves a set of variables, each of which has a domain of possible values, and a set of constraints that define the allowable combinations of values for the variables. The goal is to find a value for each variable such that all the constraints are satisfied.

 In a CSP, the domain of a variable is the set of values the variable is allowed to take, given the restrictions particular to the task.
 Variables, domains, and constraints are the three components that make up a constraint satisfaction problem in its entirety. Each constraint is specified as a pair "scope, rel".
 The scope is a tuple of the variables that participate in the constraint, and rel is a relation that lists the combinations of values those variables may take in order to satisfy the constraint.

A State-space

Solving a CSP typically involves searching for a solution in the state space of possible
assignments to the variables. The state-space is a set of all possible configurations of variable
assignments, each of which is a potential solution to the problem.
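As a minimal sketch of this idea (in Python, using a small hypothetical map-colouring CSP; the region names and colours are assumptions made for illustration, not from the notes), the variables, domains, and constraints can be written down directly and the state space of complete assignments enumerated:

from itertools import product

# A tiny hypothetical map-colouring CSP: three regions, three colours,
# and the constraint that adjacent regions must take different colours.
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
constraints = [("WA", "NT"), ("WA", "SA"), ("NT", "SA")]   # pairs that must differ

def satisfies_all(assignment):
    # Check every binary "not equal" constraint against a complete assignment.
    return all(assignment[x] != assignment[y] for x, y in constraints)

# The state space: every possible combination of values for the variables.
for values in product(*(domains[v] for v in variables)):
    assignment = dict(zip(variables, values))
    if satisfies_all(assignment):
        print(assignment)      # each line printed is one solution of the CSP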
Domain Categories within CSP

The domain of a variable in a Constraint satisfaction problem in artificial intelligence can be categorized into three types: finite, infinite, and continuous.

 Finite domains have a finite number of possible values, such as colors or integers.

 Infinite domains have an infinite number of possible values, such as real numbers.

 Continuous domains have an infinite number of possible values, but they can be
represented by a finite set of parameters, such as the coefficients of a polynomial
function.

In mathematics, a continuous domain is a set of values that can be described as a continuous range of real numbers. This means that there are no gaps or interruptions in the values between any two points in the set.

Types of Constraints in CSP

Several types of constraints can be used in a Constraint satisfaction problem in artificial intelligence, including:

 Unary Constraints:
A unary constraint is a constraint on a single variable. For example, Variable A not equal
to “Red”.
 Binary Constraints:
A binary constraint involves two variables and specifies a constraint on their values. For
example, a constraint that two tasks cannot be scheduled at the same time would be a
binary constraint.
 Global Constraints:
Global constraints involve more than two variables and specify complex relationships
between them. For example, a constraint that no two tasks can be scheduled at the
same time if they require the same resource would be a global constraint.
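The three types can be illustrated with small Python predicates (a hedged sketch; the function names and the scheduling scenario are assumptions made for illustration):

# Unary constraint: restricts a single variable, e.g. A must not be "Red".
def unary_not_red(a):
    return a != "Red"

# Binary constraint: relates two variables, e.g. two tasks must not share a time slot.
def binary_different_slots(slot1, slot2):
    return slot1 != slot2

# Global constraint: involves many variables at once, e.g. all tasks that use the
# same resource must occupy pairwise different time slots.
def global_all_different(slots):
    return len(slots) == len(set(slots))

print(unary_not_red("Blue"))                # True
print(binary_different_slots(3, 3))         # False
print(global_all_different([1, 2, 4, 4]))   # False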

Example: Think of a Sudoku puzzle in which some of the squares already contain certain integers.

You must complete the empty squares with numbers between 1 and 9, making sure that no row, column, or 3x3 block contains a repeated integer. Stated this way, the puzzle is simply a problem that must be solved while taking certain constraints into consideration.

The empty squares are the variables, the range of integers (1-9) that can occupy those squares is the domain, and the constraints are the rules that determine which values a variable is allowed to take.
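As a minimal sketch of the Sudoku constraints (assuming the grid is stored as a 9x9 list of lists with 0 marking an empty square; this representation is an assumption made for illustration), a consistency check could look like this:

def no_repeats(values):
    # True if the filled cells in one unit (row, column or block) contain no duplicate digit.
    filled = [v for v in values if v != 0]
    return len(filled) == len(set(filled))

def sudoku_consistent(grid):
    # Check every row, column and 3x3 block of a 9x9 grid (0 = empty cell).
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    blocks = [[grid[3*br + r][3*bc + c] for r in range(3) for c in range(3)]
              for br in range(3) for bc in range(3)]
    return all(no_repeats(unit) for unit in rows + cols + blocks)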

Crypt-Arithmetic Problem:
 It is a type of Constraint Satisfaction Problem in Artificial Intelligence.
 The crypt-arithmetic problem in AI is a type of encryption problem in which a written message in alphabetical form, which is easily readable and understandable, is converted into a numeric form that is neither easily readable nor understandable.
 In simpler words, the crypt-arithmetic problem deals with converting a message from readable plain text to non-readable ciphertext. The constraints which this problem follows during the conversion are as follows:

1. A number from 0-9 is assigned to each particular letter.
2. Each different letter has a unique number.
3. All occurrences of the same letter have the same number.
4. The numbers should satisfy all the arithmetic operations that any normal number does.

Let us take an example of the message: SEND MORE MONEY.

Here, to convert it into numeric form, we first split each word separately and represent it as
follows:

SEND
MORE
-------------
MONEY

These letters are then replaced by numbers such that all the constraints are satisfied. So, initially, we have all blank spaces.

We first look at the most significant letter of the result, which is 'M' in the word 'MONEY'. This letter can only be produced by a carry, and the carry generated can only be 1. So, we have M=1.

Now, we have S+M=O in the second column from the left. Here M=1, therefore we have S+1=O. So, we need a number for S such that adding 1 to it generates a carry, and such a number is 9. Therefore, we have S=9 and O=0.
Now, in the next column from the same side we have E+O=N. Here O=0, which would mean E+0=N, i.e. E=N, which is not possible. This means a carry was generated by the lower place digits. So we have:

1+E=N ----------(i)

The next column gives us N+R=E -------(ii)

So, to satisfy both equations (i) and (ii), we take E=5 and N=6.

Now, R should be 9, but 9 is already assigned to S. So, R=8, together with a carry of 1 generated from the lower place digits.

Finally, we have D+5=Y, and this should generate a carry, so D must be greater than 4. As 5, 6, 8 and 9 are already assigned, we have D=7 and therefore Y=2.

Therefore, the solution to the given Crypt-Arithmetic problem is:

S=9; E=5; N=6; D=7; M=1; O=0; R=8; Y=2

Which can be shown in layout form as:

9567
1085
-------------
10652
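A minimal brute-force sketch in Python (using itertools.permutations; the variable names are assumptions made for illustration) finds the same assignment by trying digit permutations until the column constraints are satisfied:

from itertools import permutations

# Assign distinct digits to the eight letters and keep the assignment that
# makes SEND + MORE = MONEY, with the leading letters S and M non-zero.
letters = "SENDMORY"
for digits in permutations(range(10), len(letters)):
    a = dict(zip(letters, digits))
    if a["S"] == 0 or a["M"] == 0:
        continue
    send  = 1000*a["S"] + 100*a["E"] + 10*a["N"] + a["D"]
    more  = 1000*a["M"] + 100*a["O"] + 10*a["R"] + a["E"]
    money = 10000*a["M"] + 1000*a["O"] + 100*a["N"] + 10*a["E"] + a["Y"]
    if send + more == money:
        print(send, "+", more, "=", money)    # 9567 + 1085 = 10652
        break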

Constraint Propagation:

Constraint propagation is the process of communicating the domain reduction of a decision variable to all of the constraints that are stated over this variable. This process can result in more domain reductions.

These domain reductions, in turn, are communicated to the appropriate constraints. This
process continues until no more variable domains can be reduced or when a domain becomes
empty and a failure occurs.

Constraint propagation means the process of applying filtering to the constraints in the CSP
instance at hand.
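A minimal sketch of this filtering (in Python, for binary "not equal" constraints only; the variable names and the arc-revision scheme, in the spirit of AC-3, are assumptions made for illustration rather than the notes' own algorithm):

def revise(domains, x, y):
    # Remove every value of x that has no supporting value left in y's domain.
    removed = False
    for vx in list(domains[x]):
        if not any(vx != vy for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def propagate(domains, constraints):
    # Re-examine arcs until no domain can be reduced further, or a domain becomes empty.
    arcs = constraints + [(y, x) for x, y in constraints]
    queue = list(arcs)
    while queue:
        x, y = queue.pop(0)
        if revise(domains, x, y):
            if not domains[x]:
                return False                  # empty domain: failure
            queue += [(z, w) for z, w in arcs if w == x and z != y]
    return True

domains = {"A": ["red"], "B": ["red", "green"], "C": ["red", "green"]}
print(propagate(domains, [("A", "B"), ("B", "C")]))   # True
print(domains)   # B loses "red" (because A = red), and C then loses "green"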
Backtracking:

 Backtracking is one of the techniques that can be used to solve a constraint satisfaction problem; we can write the algorithm using this strategy.

 It is based on brute-force search: for the given problem, we try to build all the possible solutions and pick out the desired solutions from among them.

 Backtracking is not used for solving optimization problems. Backtracking is used when we have multiple solutions and we require all those solutions.

 The name backtracking itself suggests that we go back and come forward: if a partial solution satisfies the condition, we continue and eventually return success; otherwise we go back and try another choice. It is used to solve problems in which a sequence of objects is chosen from a specified set so that the sequence satisfies some criteria.

When to use a Backtracking algorithm?

When we have multiple choices, then we make the decisions from the available choices. In the
following cases, we need to use the backtracking algorithm:

o Sufficient information is not available to make the best choice, so we use the backtracking strategy to try out all the possible solutions.
o Each decision leads to a new set of choices, and again we backtrack to make new decisions. In this case, we need to use the backtracking strategy.

How does Backtracking work?

Backtracking is a systematic method of trying out various sequences of decisions until you find one that works. Let's understand it through an example.

We start at the start node. First, we move to node A. Since it is not a feasible solution, we move to the next node, i.e., B. B is also not a feasible solution, and it is a dead end, so we backtrack from node B to node A.

Suppose another path exists from node A to node C. So, we move from node A to node C. It is also a dead end, so we again backtrack from node C to node A, and then from node A back to the starting node.

Now we check whether any other path exists from the starting node. So, we move from the start node to node D. Since it is not a feasible solution, we move from node D to node E. Node E is also not a feasible solution and is a dead end, so we backtrack from node E to node D.

Suppose another path exists from node D to node F. So, we move from node D to node F. Since it is not a feasible solution and it is a dead end, we check for another path from node F. Suppose there is a path from node F to node G, so we move from node F to node G. Node G is a success node.
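The traversal above can be sketched as a small recursive depth-first search in Python (the tree and the goal test below are hypothetical, chosen only to mirror the example):

# Depth-first search with backtracking over the example tree: only G is a success node.
tree = {
    "Start": ["A", "D"],
    "A": ["B", "C"],
    "D": ["E", "F"],
    "F": ["G"],
}
goal = "G"

def backtrack(node, path):
    path.append(node)
    if node == goal:
        return path                       # feasible solution found
    for child in tree.get(node, []):      # try each available choice in turn
        result = backtrack(child, path)
        if result is not None:
            return result
    path.pop()                            # dead end: undo the choice and go back
    return None

print(backtrack("Start", []))   # ['Start', 'D', 'F', 'G']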

The terms related to the backtracking are:

o Live node: The nodes that can be further generated are known as live nodes.
o E-node: The live node whose children are currently being generated (i.e., the node being expanded).
o Success node: The node is said to be a success node if it provides a feasible solution.
o Dead node: The node which cannot be further generated and also does not provide a
feasible solution is known as a dead node.

Applications of Backtracking
o N-queen problem
o Sum of subset problem
o Graph coloring
o Hamiltonian cycle
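As one concrete application from the list above, here is a hedged sketch of the N-queen problem solved by backtracking (the function name and the representation, one column index per row, are assumptions made for illustration):

# Place one queen per row, trying each column; undo a placement (backtrack)
# whenever it cannot be extended to a full solution.
def solve_n_queens(n, cols=None):
    if cols is None:
        cols = []
    row = len(cols)
    if row == n:
        return cols                                    # all queens placed safely
    for col in range(n):
        safe = all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols))        # no shared column or diagonal
        if safe:
            cols.append(col)
            result = solve_n_queens(n, cols)
            if result is not None:
                return result
            cols.pop()                                 # backtrack
    return None

print(solve_n_queens(8))   # one valid arrangement, e.g. [0, 4, 7, 5, 2, 6, 1, 3]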

Adversarial Search
 Adversarial search is a search in which we examine the problems that arise when we try to plan ahead in a world where other agents are planning against us.
 There might be situations where more than one agent is searching for a solution in the same search space; this situation usually occurs in game playing.
 An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the others and they play against each other. Each agent needs to consider the actions of the other agents and the effect of those actions on its own performance.
 So, searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as Games.

Types of Games in AI:

 Perfect information: A game with perfect information is one in which agents can see the complete board. Agents have all the information about the game, and they can also see each other's moves. Examples are Chess, Checkers, Go, etc.
 Imperfect information: If the agents in a game do not have all the information about the game and are not aware of everything that is going on, such games are called games with imperfect information, such as Battleship, blind tic-tac-toe, Bridge, etc.
 Deterministic games: Deterministic games are those games which follow a strict pattern and
set of rules for the games, and there is no randomness associated with them. Examples are
chess, Checkers, Go, tic-tac-toe, etc.
 Non-deterministic games: Non-deterministic games are those which have various unpredictable events and a factor of chance or luck. This factor of chance or luck is introduced by dice or cards. These games are random, and each action's outcome is not fixed. Such games are also called stochastic games.
Example: Backgammon, Monopoly, Poker, etc.

Game Playing:

Game playing is a popular application of artificial intelligence that involves the development
of computer programs to play games, such as chess, checkers, or Go. The goal of game
playing in artificial intelligence is to develop algorithms that can learn how to play games
and make decisions that will lead to winning outcomes.
Searching techniques like BFS (Breadth First Search) are not suitable here, because the branching factor is very high, so searching would take a lot of time. So, we need other search procedures that improve the –
 Generate procedure so that only good moves are generated.
 Test procedure so that the best move can be explored first.
One of the earliest examples of successful game playing AI is the chess program Deep Blue,
developed by IBM, which defeated the world champion Garry Kasparov in 1997.

There are two main approaches to game playing in AI, rule-based systems and machine
learning-based systems.
1. Rule-based systems use a set of fixed rules to play the game.
2. Machine learning-based systems use algorithms to learn from experience and make
decisions based on that experience. For example, AlphaGo, developed by DeepMind, was
the first machine learning-based system to defeat a world champion in the game of Go.

The most common search technique in game playing is the Minimax search procedure. It is a depth-first, depth-limited search procedure. It is used for games like chess and tic-tac-toe.

Minimax algorithm uses two functions –

MOVEGEN: It generates all the possible moves that can be generated from the current
position.

STATICEVALUATION: It returns a value depending upon the goodness of the position from the viewpoint of the two players.

This algorithm is for a two-player game, so we call the first player PLAYER1 and the second player PLAYER2. The value of each node is backed up from its children. For PLAYER1 the backed-up value is the maximum value of its children, and for PLAYER2 the backed-up value is the minimum value of its children. It provides the most promising move for PLAYER1, assuming that PLAYER2 also makes its best move. It is a recursive algorithm, as the same procedure occurs at each level.
Figure 1: Before backing-up of values

Figure 2: After backing-up of values

We assume that PLAYER1 will start the game and that 4 levels are generated. The values of the nodes H, I, J, K, L, M, N, O are provided by the STATICEVALUATION function. Level 3 is a maximizing level, so all nodes of level 3 take the maximum values of their children. Level 2 is a minimizing level, so all its nodes take the minimum values of their children. This process continues up the tree. The backed-up value of A is 23, which means A should choose the move to C in order to win.
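A minimal Python sketch of this back-up (the tree and its leaf values below are hypothetical, not the ones in the figures; MOVEGEN is modelled by a dictionary of children and STATICEVALUATION by a dictionary of leaf values):

children = {                     # MOVEGEN: moves available from each position
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F", "G"],
}
leaf_value = {"D": 3, "E": 5, "F": 2, "G": 9}   # STATICEVALUATION at the leaves

def minimax(node, maximizing):
    if node not in children:                    # leaf: use the static evaluation
        return leaf_value[node]
    values = [minimax(child, not maximizing) for child in children[node]]
    return max(values) if maximizing else min(values)

# PLAYER1 (the maximizer) moves at A; B and C are minimizing nodes.
print(minimax("A", True))   # backed-up value of A = max(min(3, 5), min(2, 9)) = 3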
Advantages of Game Playing in Artificial Intelligence:

1. Advancement of AI: Game playing has been a driving force behind the development of
artificial intelligence and has led to the creation of new algorithms and techniques that
can be applied to other areas of AI.
2. Education and training: Game playing can be used to teach AI techniques and algorithms
to students and professionals, as well as to provide training for military and emergency
response personnel.
3. Research: Game playing is an active area of research in AI and provides an opportunity to
study and develop new techniques for decision-making and problem-solving.
4. Real-world applications: The techniques and algorithms developed for game playing can
be applied to real-world applications, such as robotics, autonomous systems, and decision
support systems.

Disadvantages of Game Playing in Artificial Intelligence:

1. Limited scope: The techniques and algorithms developed for game playing may not be
well-suited for other types of applications and may need to be adapted or modified for
different domains.
2. Computational cost: Game playing can be computationally expensive, especially for
complex games such as chess or Go, and may require powerful computers to achieve
real-time performance.

Alpha-Beta Pruning:

 Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.
 It is a technique by which we can compute the correct minimax decision without checking every node of the game tree; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
 It can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire sub-trees.
 The two-parameter can be defined as:

a. Alpha: The best (highest-value) choice we have found so far at any point along
the path of Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the
path of Minimizer. The initial value of beta is +∞.
Condition for Alpha-beta pruning:

The main condition required for alpha-beta pruning is:

α>=β

Key points about alpha-beta pruning:


o The Max player will only update the value of alpha.
o The Min player will only update the value of beta.
o While backtracking the tree, the node values will be passed to upper nodes instead of
values of alpha and beta.
o We will only pass the alpha, beta values to the child nodes.

Working of Alpha-Beta Pruning:

Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.

Step 1: In the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 becomes the value of α at node D; the node value is also 3.

Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min's turn. Now β = +∞ is compared with the value of the available successor node, i.e. min(∞, 3) = 3; hence at node B we now have α = -∞ and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down as well.

Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E α = 5 and β = 3, where α >= β, so the right successor of E is pruned and the algorithm does not traverse it. The value at node E becomes 5.
Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha is changed; the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared, first with the left child, which is 0, giving max(3, 0) = 3, and then with the right child, which is 1, giving max(3, 1) = 3. So α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta changes, as it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, which is G, is pruned, and the algorithm does not compute the entire sub-tree of G.

Step 8: C now returns the value 1 to A. Here the best value for A is max(3, 1) = 3. The final game tree shows the nodes that were computed and the nodes that were never computed. Hence, the optimal value for the maximizer is 3 for this example.
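The walkthrough above can be sketched in Python as follows (the leaf values follow the steps in the text; the leaves that end up pruned, such as E's second child and G's children, are not given in the notes and are assumptions here):

import math

tree = {                                     # internal nodes and their children
    "A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
    "D": [2, 3], "E": [5, 9], "F": [0, 1], "G": [7, 5],
}

def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, str):            # a leaf: return its static value
        return node
    if maximizing:
        value = -math.inf
        for child in tree[node]:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                # prune the remaining children
                break
        return value
    else:
        value = math.inf
        for child in tree[node]:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:                # prune the remaining children
                break
        return value

print(alphabeta("A", -math.inf, math.inf, True))   # optimal value for the maximizer: 3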
