Chapter - 3 Searching and Planning

Chapter 3 discusses problem-solving through searching in artificial intelligence, covering key concepts such as state, action, and goal state. It outlines the roles of problem-solving agents, the formulation of goals and problems, and various search strategies including uninformed and informed searches. The chapter also emphasizes the importance of search algorithms in finding efficient solutions and the evaluation of their effectiveness based on completeness, optimality, and resource usage.


1

Chapter 3
Searching and Planning
Topics we will cover
2

• Solving Problems by Searching


• Problem Solving Agents
• Problem Formulation
• Search Strategies
1. Uninformed search strategies
2. Informed search strategies
• Avoiding Repeated States
• Constraint Satisfaction Problem
• Games as Search Problems
Solving Problems by Searching
3

• Search is one of the most fundamental techniques in artificial intelligence. It is


used to solve problems by exploring a set of possible states (or configurations) to
find a path from the initial state to the goal state.
• Search algorithms are widely used in applications such as path finding, puzzle
solving, and decision-making.
• Key Concepts:
• State: A representation of the current situation in the problem.
• Action: A transition from one state to another.
• State Space: The set of all possible states reachable from the initial state.
• Goal State: The state that satisfies the problem's objective.
• Path Cost: The cost associated with reaching a particular state from the
initial state (e.g., time, distance, or resources).
Problem-solving agents
4

• A problem-solving agent is a goal-based agent that is supposed to act in such a
way that the environment goes through a sequence of states that maximizes the
performance measure.
• However, designing such an agent can be challenging because the task of
maximizing performance is often too abstract. To simplify, the agent can adopt a
goal and focus on achieving it.
• Example: Traveling from Jimma to Adama, i.e., imagine an agent in Jimma that
wants to travel to Adama. The agent must consider various factors, such as:
– Cost: How much will the journey cost?
– Distance: How far is the journey?
– Speed: How quickly can the agent reach Adama?
– Comfort: How comfortable is the journey?
Cont…
5

• By setting a clear goal (e.g., reaching Adama), the agent can organize its
behavior and focus on achieving that specific objective. This process begins
with goal formulation, where the agent defines what it wants to achieve based
on the current situation.
• What is goal formulation? What about problem formulation?
Goal Formulation:
• The agent defines its goal, such as reaching Adama.
• A goal is a set of states where the desired condition is satisfied (e.g., being in
Adama).
Problem Formulation:
• The agent decides what actions and states to consider.
• For example, the agent might consider actions like driving from one city to
another and states like being in specific towns along the way.
How Does the Agent Decide What to Do?
6

• Suppose our previous example in which the agent is in Jimma and wants to reach
Adama. Assume there are three roads leaving Jimma, but none of them lead
directly to Adama. What should the agent do?
Without Knowledge:
• If the agent has no information about the geography, it might randomly choose
one of the roads.
With Knowledge:
• If the agent has a map, it can use the map to explore possible routes and plan a
sequence of actions that will eventually lead to Adama.
• The map provides information about:
– States: The towns the agent might pass through.
– Actions: The roads the agent can take to move between towns.
The Role of Search
7

• When the agent has multiple options but doesn’t know their value, it can:
1. Examine different sequences of actions that lead to states of known value.
2. Choose the best sequence based on its goal (e.g., shortest distance, lowest
cost).
• This process is called search. A search algorithm takes a problem as input and
returns a solution in the form of a sequence of actions.
• Once a solution is found, the agent can execute the recommended actions. This is
called the execution phase.
The Problem-Solving Process
8

In general, the problem-solving agent follows a three-step process:


1. Formulate the Problem:
– Define the initial state, goal state, actions, and path cost.
– Example: Initial state = Jimma, Goal state = Adama, Actions = Driving
between towns.
2. Search for a Solution:
– Use a search algorithm to find a sequence of actions that transforms the initial
state into the goal state.
3. Execute the Solution:
– Carry out the actions recommended by the search algorithm.
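The formulation in step 1 can be sketched as a few plain data structures. The intermediate towns and road distances below are illustrative assumptions, not real road data:

```python
# Sketch of a problem formulation for the Jimma -> Adama example.
# NOTE: the intermediate towns and distances are hypothetical.
ROADS = {
    "Jimma":       {"Welkite": 150, "Wolaita": 220},
    "Welkite":     {"Jimma": 150, "Addis Ababa": 160},
    "Addis Ababa": {"Welkite": 160, "Adama": 100},
    "Wolaita":     {"Jimma": 220, "Adama": 280},
    "Adama":       {"Addis Ababa": 100, "Wolaita": 280},
}

INITIAL_STATE = "Jimma"

def goal_test(state):
    # Goal state: being in Adama.
    return state == "Adama"

def actions(state):
    # Actions: drive to any directly connected town.
    return list(ROADS[state])

def path_cost(path):
    # Path cost: sum of step costs (assumed km) along the path.
    return sum(ROADS[a][b] for a, b in zip(path, path[1:]))

print(path_cost(["Jimma", "Welkite", "Addis Ababa", "Adama"]))  # 410
```

A search algorithm (step 2) would then explore sequences of these actions until `goal_test` succeeds.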
State Space
9

• The state space is the set of all states reachable from the initial state.
• The agent’s goal is to perform a sequence of actions that transforms the
environment from the initial state to one of the goal states.
• Example: Suppose an agent is in Arad and wants to reach Bucharest. What
sequence of actions will lead to the agent achieving its goal?
• That is, the agent must determine a sequence of actions (e.g., driving through
specific towns) that will lead it to Bucharest.
• So, the state space includes all possible towns the agent can pass through, and the
search algorithm helps the agent find the best path.
Example: Romania (1)
10

Figure - A Simplified road map of part of Romania.


Example: Romania (2)
11

• Formulate goal:
• Be in Bucharest
• Formulate problem:
• States: various cities
• Actions: drive between cities
• Find solution:
• sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest

• A solution to a problem is a path from the initial state to a goal state.


• But how is the solution quality, or the performance of the problem-solving
agent, measured?
Cont...
12

• The effectiveness of a search can be measured in at least three ways:


 Does it find a solution?
 Is it a good solution (low cost)?
 What is the time and memory required to find a solution (search cost)?
Cont….
13
Example: Vacuum world state space graph
14
Aim: Understanding the state space for
vacuum cleaner world.

Note: R - Right
L - Left
S - Suck

• states? One of two locations, each of which may contain dirt =>
2 × 2² = 8 possible states
• actions? Left, Right, Suck
• goal test? no dirt at any location
• path cost? 1 per action
• Such a set of all the possible states for a problem is called the state space.
Example: Vacuum world state space graph
15

• The number of possible world states in
a vacuum cleaner problem is given by
n × 2ⁿ, where n is the number of rooms.
• How many states would we have if the
number of rooms is 4 rather than 2?
4 × 2⁴ = 64
• Such a set of all the possible states
for a problem is called the state
space.
• On the right side, the state space for
the Romania problem is given.
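The n × 2ⁿ count can be checked by enumerating the states directly, taking a state to be a pair (agent location, dirt status of each room). A small sketch:

```python
from itertools import product

def vacuum_states(n):
    # A state = (agent location, dirt status of each of the n rooms),
    # giving n x 2^n states in total.
    return [(loc, dirt)
            for loc in range(n)
            for dirt in product([False, True], repeat=n)]

print(len(vacuum_states(2)))  # 8  (2 x 2^2)
print(len(vacuum_states(4)))  # 64 (4 x 2^4)
```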
Example: The 8-puzzle
16

• states? locations of tiles & blank space => 9! different states

• actions? move blank left, right, up, down

• goal test? = goal state (given)

• path cost? 1 per move


Tree search algorithms
17

• Search algorithms are one of the most important areas of Artificial


Intelligence.
• They are important in solving search problems.
• A search problem consists of a search space, a start state, and a goal state.
• Search Space: represents the set of possible solutions a system
may have.
• Start State: the state from which the agent begins the search.
• Goal test/state: a function that observes the current state and returns
whether the goal state is achieved or not.
• Search algorithms help AI agents attain the goal state through the
assessment of scenarios and alternatives.
Cont…
18
• If the node represents a goal state, we stop searching.
• Else we expand the selected node (generate its possible successors using
the successor function) and add the successors as child nodes of the
selected node.
• The following are the four essential properties used to compare the
efficiency of search algorithms:
o Completeness: complete if it is guaranteed to return a solution whenever at
least one solution exists.
o Optimality: optimal if the solution found is the best (lowest path cost) among
all the solutions.
o Time Complexity: the amount of time the algorithm takes to complete the task.
o Space Complexity: the maximum storage or memory taken by the algorithm.
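The select-test-expand loop described above can be sketched as a generic tree search. The fringe discipline (FIFO, LIFO, priority queue) is what distinguishes the strategies discussed later; a FIFO queue is used here for illustration, and the city names are a fragment of this chapter's Romania map:

```python
from collections import deque

class Problem:
    """A toy search problem: states are cities, successors come from a map."""
    def __init__(self, initial, goal, successors):
        self.initial, self.goal, self.successors = initial, goal, successors

    def goal_test(self, state):
        return state == self.goal

def tree_search(problem):
    # Fringe of unexpanded paths. A FIFO queue is used here, so this
    # particular instance behaves like breadth-first search; a stack or
    # priority queue would give the other strategies discussed below.
    fringe = deque([[problem.initial]])
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if problem.goal_test(state):        # goal test on selection
            return path
        for child in problem.successors.get(state, []):
            if child not in path:           # avoid trivial loops
                fringe.append(path + [child])
    return None

# A fragment of the Romania map used throughout this chapter.
ROMANIA = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
}
print(tree_search(Problem("Arad", "Bucharest", ROMANIA)))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```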
Cont…
19

• Time and space are measured in terms of:


• b: maximum branching factor (maximum number of successors of any
node) of the search tree. E.g. the branching factor from Arad is 3.
• d: depth of the least-cost solution.
• m: maximum depth of the state space (may be ∞).

Tree search example (1)
20

• Partial search trees for finding a route from Arad to Bucharest.


• Nodes that have been expanded are shaded;
Tree search example (2)
21
• Partial search trees for finding a route from Arad to Bucharest.
• Nodes that have been expanded are shaded;
• Nodes that have been generated but not yet expanded are outlined
in bold;
Tree search example (3)
22

• Partial search trees for finding a route from Arad to Bucharest.


• Nodes that have been expanded are shaded;
• Nodes that have been generated but not yet expanded are outlined
in bold;
• Nodes that have not yet been generated are shown in faint dashed
lines.

Continues this way...


Example 2: Route finding Problem
23

• A search tree is a representation in which nodes denote paths and branches


connect paths.
• The node with no parent is the root node.
• The nodes with no children are called leaf nodes.
• Example 2: Partial search tree for route finding from Saris to Main campus.
(Figure: (a) the initial state Saris, with a goal test; (b) after expanding
Saris: Mercato, JiT campus, Gabriel; (c) after expanding Mercato:
Main Campus, Frustale, Gabriel, Agp, ….)


Example 2: Route finding Problem
24

• Partial search tree for route finding from Sidist Kilo to Stadium.

(Figure: (a) the initial state Sidist Kilo, with a goal test; (b) after
expanding Sidist Kilo: Arat Kilo, Giorgis, ShiroMeda; (c) after expanding
Arat Kilo: Meskel Square (Stadium), Piassa, Megenagna.)
Searching strategies
25
• Search strategy gives the order in which the search space is examined.
1. Uninformed (= blind) search
• They do not need domain knowledge that guide them towards the goal.
• Have no information about the number of steps or the path cost from the
current state to the goal.
• It is important for problems for which there is no additional information to
consider.
2. Informed (= heuristic) search
• Have problem-specific knowledge (knowledge that is true from experience).
• Have knowledge about how far are the various state from the goal.
• Can find solutions more efficiently than uninformed search.
Search methods types:
26

• Uninformed search
• Breadth first search
• Depth first search
• Uniform cost search,
• Depth limited search
• Iterative deepening search, etc.
• Informed search
• Greedy search
• A*-search
• Iterative improvement,
• Constraint satisfaction, etc.
Uninformed search strategies
27

• The simplest type of tree search algorithm is called uninformed, or blind, tree
search.
• Uninformed search strategies use only the information available in the problem
definition.
• They have no additional information about the distance from the current state to
the goal.
• Breadth-first search
• Depth-first search
• Uniform-cost search
• Depth-limited search
• Iterative deepening search
Breadth-first search (1)
28

• In breadth-first search we always select the minimum depth node for


expansion.
• Expand shallowest(least depth) unexpanded node.
• Implementation:
• It uses a queue data structure (FIFO approach), i.e., new successors
go at the end.
Breadth-first search (2)
29

• Expand shallowest unexpanded node


• Implementation:
• It uses queue data structure(FIFO approach), i.e., new successors go
at end.
Breadth-first search (3)
30

• Expand shallowest unexpanded node


• Implementation:
• It uses queue data structure(FIFO approach), i.e., new successors go at
end.
Breadth-first search (4)
31

• Expand shallowest unexpanded node


• Implementation:
• It uses queue data structure(FIFO approach), i.e., new successors go at
end.

• Continues this way...
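The expansion order shown in the figures can be sketched as follows; the small binary tree (root A, leaves D through G) is assumed for illustration:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    # FIFO queue of paths: new successors go at the end,
    # so the shallowest unexpanded node is always selected.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None

TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(breadth_first_search(TREE, "A", "G"))  # ['A', 'C', 'G']
```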


Exercise
32

• Apply BFS to find an optimal path from start node to Goal node.
• S is Start node and
• G is Goal node.
Depth-first search (1)
33

• Expand deepest unexpanded node.


Depth-first search (1)
34

• Expand deepest unexpanded node.


• Implementation:
• It uses a stack data structure (LIFO approach), i.e., put successors at
the front.
Depth-first search (2)
35

• Expand deepest unexpanded node


• Implementation:
• It uses stack data structure(LIFO approach) i.e., put successors at
front.
Depth-first search (3)
36

• Expand deepest unexpanded node


• Implementation:
• fringe = LIFO stack, i.e., put successors at front
Depth-first search (4)
37

• Expand deepest unexpanded node


• Implementation:
• fringe = LIFO stack, i.e., put successors at front
Depth-first search (5)
38

• Expand deepest unexpanded node


• Implementation:
• fringe = LIFO stack, i.e., put successors at front
Depth-first search (6)
39

• Expand deepest unexpanded node


• Implementation:
• fringe = LIFO stack, i.e., put successors at front
Depth-first search (7)
40

• Expand deepest unexpanded node


• Implementation:
• fringe = LIFO stack, i.e., put successors at front
Depth-first search (8)
41

• Expand deepest unexpanded node


• Implementation:
• fringe = LIFO stack, i.e., put successors at front
Depth-first search (9)
42

• Expand deepest unexpanded node


• Implementation:
• fringe = LIFO stack, i.e., put successors at front
Depth-first search (10)
43

• Expand deepest unexpanded node


• Implementation:
• fringe = LIFO stack, i.e., put successors at front
Depth-first search (11)
44

• Expand deepest unexpanded node


• Implementation:
• fringe = LIFO stack, i.e., put successors at front
Depth-first search (12)
45

• Expand deepest unexpanded node


• Implementation:
• fringe = LIFO stack, i.e., put successors at front
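The same traversal with a LIFO stack gives depth-first search; the small binary tree below is assumed for illustration, and children are pushed in reverse so the leftmost child is expanded first, matching the figures:

```python
def depth_first_search(graph, start, goal):
    # LIFO stack of paths: successors go at the front,
    # so the deepest unexpanded node is always selected.
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push children in reverse so the leftmost child is expanded first.
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append(path + [child])
    return None

TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(depth_first_search(TREE, "A", "G"))  # ['A', 'C', 'G']
```

Note that the whole left subtree (B, D, E) is explored before the search backtracks to C, exactly as in the slides.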
Depth-first search Vs. Breadth-first search
46
Uniform cost Search
47
• Finds the shortest path to the goal in terms of cost.
• It modifies BFS by expanding least-cost unexpanded node first.
• It is used for traversing a weighted tree or graph.
• Implementation:
• fringe = queue ordered by path cost
(Figure: a weighted graph with start node S and goal node G, showing the
frontier after successive UCS expansions.)

Properties:
· Equivalent to breadth-first search if all step costs are equal.
· This strategy finds the cheapest solution.
· It does not care about the number of steps involved in the search, only
about path cost. Because of this, the algorithm may get stuck in an infinite
loop (e.g., when zero-cost actions form a cycle).
Cont...
48

• Look at the following example(2) on how UCS works.

• S=start node, G=goal node


• From node S we look for a
node to expand and we have
nodes A and G but since it’s
a uniform cost search it’s
expanding the node with the
lowest step cost.
• So, node A becomes the successor rather than our required goal node G.
• From A we look at its children nodes B and C. Since, C has the lowest step
cost it traverses through node C and then we look at successors of C i.e. D and
G. Since, the cost to D is low we expand along with the node D.
Cont...
49

• D has only one child, G, which is our
required goal state, with a path cost of
6.
• But the cheapest path is through C→G,
leading to the goal state G with total
cost 4 by implementing the uniform cost
search algorithm, i.e., S→ A→ C→ G.

• If we traverse this way, our total path cost from S to G is
just 4, even after traversing through many nodes, rather than going to G directly,
where the cost is 12, and 4 << 12 (in terms of step cost).
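The worked example can be sketched with a priority queue ordered by path cost. The edge costs below are as implied by the example (S→G directly costs 12, S→A→C→G costs only 4):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    # Priority queue (fringe) ordered by path cost g(n).
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, cost
        if state in explored:
            continue
        explored.add(state)
        for nbr, step in graph.get(state, {}).items():
            if nbr not in explored:
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return None, float("inf")

# Edge costs as implied by the worked example above.
GRAPH = {
    "S": {"A": 1, "G": 12},
    "A": {"B": 3, "C": 1},
    "B": {"D": 3},
    "C": {"D": 1, "G": 2},
    "D": {"G": 3},
}
print(uniform_cost_search(GRAPH, "S", "G"))  # (['S', 'A', 'C', 'G'], 4)
```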
Depth-limited search
50

• Depth-first search is not complete for unbounded trees.


• Depth-limited search overcomes this by:
• Performing depth-first search but with a depth limit l,
i.e., it never expands nodes at depth l.
• Unfortunately, this introduces a new source of incompleteness:
• What if the solution is at a depth greater than l?
• Also, depth-limited search is non-optimal if l > d, i.e., if the solution is
above the cut-off depth l. That is, when the depth of the solution (d) is
smaller than the depth limit l.
Properties of DLS
51

• E.g., depth-first search with depth limit l = 2,
i.e., never expand nodes at depth l = 2.

• Properties
• Incomplete if solution is below cut-off depth l,
that is if l<d
• Not optimal if solution is above cut-off depth l,
that is if l>d
Iterative deepening search
52

• Iteratively run depth-limited search.

• Gradually increase depth cut-off.

• It finds the best depth limit.

• It does this by gradually increasing the limit— first 0, then 1, then 2, and
so on - until a goal is found.
Iterative deepening search, l =0
53

At L=0, the start node is goal-tested but no nodes are expanded.


This is so that you can solve trick problems like, “Starting in
Arad, go to Arad.”

1st iteration → A
Iterative deepening search l =1
54

At L=1, the start node is expanded. Its children are goal-tested,


but not expanded. Recall that to expand a node means to
generate its children.

2nd iteration → A, B, C
Iterative deepening search l =2
55

At L=2, the start node and its children are expanded. Its
grand-children are goal-tested, but not expanded.

3rd iteration → A, B, D, E, C, F, G
Iterative deepening search l =3
56

At L=3, the start node, its children, and its grand-children are
expanded. Its great-grandchildren are goal-tested, but not expanded.
4th iteration → A, B, D, H, I, E, J, K, C, F, L, M, G
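The iterations above can be sketched by wrapping a depth-limited search in a loop over increasing cut-offs. The binary tree below (A at the root, H through O at the leaves, as in the figure) is assumed:

```python
def depth_limited(graph, node, goal, limit, path):
    # Depth-first search that never expands nodes beyond `limit`.
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        if child not in path:              # avoid cycles on the current path
            found = depth_limited(graph, child, goal, limit - 1, path + [child])
            if found:
                return found
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    # Gradually increase the cut-off: 0, 1, 2, ... until the goal is found.
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit, [start])
        if result:
            return result
    return None

# The binary tree from the figure: A at the root, H..O at the leaves.
TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": ["H", "I"], "E": ["J", "K"], "F": ["L", "M"], "G": ["N", "O"]}
print(iterative_deepening(TREE, "A", "M"))  # ['A', 'C', 'F', 'M']
```

The early iterations repeat work, but since most nodes of a tree live at the deepest level, the overhead is modest in practice.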
Bidirectional search
57

• Sometime it is possible to reduce complexity by searching in 2 directions


at once.
• It runs two simultaneous searches:
• Forward search - from the initial state
• Backward search - from the goal state
• Usually done with breadth-first searches.
• Before expanding node, check if it is in fringe of other search.
• Can be useful, but:
• Bad space complexity (two searches in memory)
Cont...
58
• The search stops when the two searches intersect each other, i.e., when the
forward and backward searches meet.
• Only need to go to half depth.
• It can enormously reduce time complexity, but is not always applicable.
• Note that if a heuristic function is inaccurate, the two searches might miss one
another.
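A sketch of bidirectional breadth-first search on an undirected graph; parent pointers on each side let the two half-paths be stitched together at the meeting node. The chain graph at the bottom is an illustrative assumption:

```python
from collections import deque

def _expand(graph, frontier, parents, other_parents):
    # Expand one node; return a meeting node if the two frontiers intersect.
    node = frontier.popleft()
    for child in graph.get(node, []):
        if child not in parents:
            parents[child] = node
            if child in other_parents:
                return child
            frontier.append(child)
    return None

def _path_to_root(parents, node):
    path = []
    while node is not None:
        path.append(node)
        node = parents[node]
    return path

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    # One BFS from each end; parent pointers let us rebuild the path.
    parents_f, parents_b = {start: None}, {goal: None}
    front_f, front_b = deque([start]), deque([goal])
    while front_f and front_b:
        meet = _expand(graph, front_f, parents_f, parents_b)
        if meet is None and front_b:
            meet = _expand(graph, front_b, parents_b, parents_f)
        if meet is not None:
            forward = _path_to_root(parents_f, meet)[::-1]
            backward = _path_to_root(parents_b, meet)
            return forward + backward[1:]
    return None

# An undirected chain graph, assumed for illustration.
GRAPH = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D"]}
print(bidirectional_search(GRAPH, "A", "E"))  # ['A', 'B', 'C', 'D', 'E']
```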
Exercise: Uniform Cost Search
59

• Assume that node 3 is the initial state and node 4 is the goal
state.
Exercise: Uninformed Search Strategies
60

• Assume that S is Start node and G is Goal node.


(Figure: a weighted graph with start node S, goal node G, intermediate
nodes A, B, C, D, E, and the step costs shown on the edges.)
• BFS: S-A-B-C-D-E-G    DFS: S-A-D-E-G
• UCS: S-B-G    DLS: S-A-D-E-G (L=2; same as DFS because L=2 equals
the maximum depth of the tree).
Informed search strategies(Heuristic)
61

• Informed Search is another technique that has additional information


about the estimated distance from the current state to the goal.
• It equips the AI with guidance regarding how and where it can find the
problem’s solution.
• Greedy search
• A*-search
• Iterative improvement,
• etc.
Greedy Search
62

• A greedy algorithm is an approach for solving a problem by selecting the


best option available at the moment.
• It doesn't worry whether the current best result will bring the overall
optimal result.
• For example, let us see how this works for route-finding problems in
Romania. What information can we use to estimate the actual road
distance from a city to Bucharest?
• One possible answer is to use the straight-line distance, SLD from each
city to Bucharest. Table 1 shows a list of all these distances.
 Each has a heuristic function hSLD(n).
Greedy Search...
63

Figure 1: The state space of the Romania problem.


Table 1 – Values of hSLD(n) – the straight-line distance to Bucharest
64

City            hSLD(n)     City              hSLD(n)
Arad            366         Mehadia           241
Bucharest       0           Neamt             234
Craiova         160         Oradea            380
Drobeta         242         Pitesti           100
Eforie          161         Rimnicu Vilcea    193
Fagaras         176         Sibiu             253
Giurgiu         77          Timisoara         329
Hirsova         151         Urziceni          80
Iasi            226         Vaslui            199
Lugoj           244         Zerind            374
Greedy Search...
65

• Using this information(lowest value of hSLD(n)), the greedy best-first


search algorithm will select a node for expansion.
• Let us step through the greedy best-first algorithm when applied to the
problem of finding a path from Arad to Bucharest:
Step1:
• Fringe=[Arad]
• Lowest value of heuristic function hSLD(Arad)=366
• Action: expand Arad
Greedy Search...
66

Step 2:
· Fringe=[Sibiu,Timisoara,Zerind]
· Lowest value of heuristic function hSLD(Sibiu)=253
· Action: expand Sibiu
Greedy Search...
67

Step 3:
· Fringe=[Timisoara,Zerind,Arad,Fagaras,Oradea,Rimnicu Vilcea]
· Lowest value of heuristic function hSLD(Fagaras)=176
· Action: expand Fagaras
Greedy Search... ...
68

Step 4:
· Fringe=[Timisoara,Zerind,Arad,Oradea,Rimnicu
Vilcea,Sibiu,Bucharest]
· Lowest value of heuristic function hSLD(Bucharest)=0
· Action: find goal at Bucharest!
Greedy Search... ...
69

· The Greedy search finds a solution without ever expanding a node


that is not on the solution path.
· It may not find the shortest path (Not optimal) and can get stuck
in loops.
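The steps above can be sketched with a priority queue ordered by hSLD(n) alone. The road map fragment and straight-line distances are the standard Romania values used in this chapter:

```python
import heapq

# Road map fragment and straight-line distances to Bucharest (Table 1).
ROADS = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
}
H_SLD = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
         "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Bucharest": 0}

def greedy_best_first(graph, h, start, goal):
    # Priority queue ordered by h(n) alone: always expand the node
    # that looks closest to the goal.
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nbr in graph.get(state, []):
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first(ROADS, H_SLD, "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

This reproduces the expansion order of the four steps above: Arad (366), Sibiu (253), Fagaras (176), Bucharest (0).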
A* Search
70
• A* (pronounced “A star”) search is similar to greedy best-first search,
except that it also takes into account the actual path cost taken so far to
reach each node.
f(n) = g(n) + h(n)
where g(n) = total actual path cost to get to node n
h(n) = estimated path cost to get from node n to goal.
• Example: Lets see the execution steps of A* search for reaching from
Arad to Bucharest.
Step1:
• Fringe=[Arad]
• Lowest value of evaluation function f(Arad)=0+366=366
• Action: expand Arad
A* Search...
71

Step 2:
• Fringe=[Sibiu,Timisoara,Zerind]
• Lowest value of evaluation function f(Sibiu)=140+253=393
• Action: expand Sibiu
A* Search...
72

Step 3:
• Fringe=[Timisoara,Zerind,Arad,Fagaras,Oradea,Rimnicu Vilcea]
• Lowest value of evaluation function f(Rimnicu Vilcea)=220+193=413
• Action: expand Rimnicu Vilcea
A* Search...
73

Step 4:
• Fringe=[Timisoara,Zerind,Arad,Fagaras,Oradea,Craiova,Pitesti,Sibiu]
• Lowest value of evaluation function f( Fagaras)=239+176=415
• Action: expand Fagaras
A* Search...
74

Step 5:
• Fringe=[Timisoara,Zerind,Arad,Oradea,Craiova,Pitesti,Sibiu,Sibiu,Bucharest]
• Lowest value of evaluation function f(Pitesti)=317+100=417
• Action: expand Pitesti
A* Search...
75

Step 6:
• Fringe=[Timisoara,Zerind,Arad,Oradea,Craiova,Sibiu,Bucharest,Craiova,Rimnicu Vilcea]
• Lowest value of evaluation function f(Bucharest)=418+0=418
• Action: find goal at Bucharest
A* Search...
76

• A* search finds the shortest path (optimal).


• But it can be slower than Greedy Search due to the additional cost
calculation.
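The same steps with f(n) = g(n) + h(n) can be sketched as follows; the edge costs and straight-line distances are the standard Romania values, and the f-values match the steps above (f(Sibiu)=393, f(Rimnicu Vilcea)=413, f(Pitesti)=417, f(Bucharest)=418):

```python
import heapq

# Road map fragment (step costs) and straight-line distances to Bucharest.
ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
    "Craiova": {"Rimnicu Vilcea": 146, "Pitesti": 138},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}
H_SLD = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
         "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

def a_star(graph, h, start, goal):
    # Frontier entries: (f = g + h, g, state, path).
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if g > best_g.get(state, float("inf")):
            continue                      # stale queue entry
        for nbr, step in graph.get(state, {}).items():
            g2 = g + step
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

print(a_star(ROADS, H_SLD, "Arad", "Bucharest"))
# (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)
```

Unlike greedy search, A* skips the tempting Fagaras route (total cost 450) in favor of the cheaper path through Rimnicu Vilcea and Pitesti (418).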
Constraint Satisfaction Problems (CSPs)
77

• A constraint satisfaction problem consists of three components, X, D,


and C:
– X is a set of variables, {X1, ..., Xn}.
– D is a set of domains, {D1, ..., Dn}, one for each variable.
– C is a set of constraints that specify allowable combinations of
values.
• In Constraint satisfaction problems(CSPs):
• State is defined by variables Xi with values from domain Di.
• Goal Test is a set of constraints specifying allowable
combinations of values for subsets of variables.
Example: Map-Coloring (1)
78

• As an example we will consider the map-colouring problem.


• No two adjacent regions have the same colour.

Figure: A map showing the


regions of Australia

• Variables: WA, NT, Q, NSW, V, SA, T


• Domains: Di = {red, green, blue}
• Constraints: adjacent regions must have different colors, e.g.
WA ≠ NT, or (WA,NT) in {(red, green),(red, blue),(green, red),
(green, blue),(blue, red),(blue, green)}
Example: Map-Coloring (2)
79

• Here we introduce two pieces of important terminology:


• Complete: a complete state is one in which all variables have
been assigned a value.
• Consistent: a consistent state is one that does not violate any of
the specified constraints.

• Solutions are complete and consistent assignments, e.g., WA=red,


NT=green, Q=red, NSW=green, V=red, SA=blue, T=green
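A minimal backtracking sketch for the Australia map-coloring CSP above. Adjacency follows the map; the plain left-to-right variable order is a simplifying assumption (no ordering heuristic):

```python
# Adjacency of the Australian regions from the map above.
NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLORS = ["red", "green", "blue"]

def consistent(var, color, assignment):
    # A value is consistent if no already-assigned neighbor shares it.
    return all(assignment.get(n) != color for n in NEIGHBORS[var])

def backtrack(assignment, variables):
    if len(assignment) == len(variables):
        return assignment                  # complete and consistent
    var = next(v for v in variables if v not in assignment)
    for color in COLORS:
        if consistent(var, color, assignment):
            assignment[var] = color
            result = backtrack(assignment, variables)
            if result:
                return result
            del assignment[var]            # undo and try the next color
    return None

solution = backtrack({}, list(NEIGHBORS))
print(solution["WA"] != solution["NT"])  # True: adjacent regions differ
```

The returned assignment is complete (every variable has a value) and consistent (no constraint is violated), which is exactly the definition of a CSP solution given above. Tasmania, having no neighbors, can take any color.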
Example: Map Coloring
80
• Color a map with three colors so that adjacent countries have different
colors ({red, green, blue}).
• Assume that initially region B is colored red and region C blue.

(Figure: a map with regions A, B, C, D, E, F, G to be colored.)

Variables: A, B, C, D, E, F, G
Constraints: "no neighboring regions have the same color"
Exercise: Coloring Problem
81

• Use at most three colors (red, green and blue) to color the given
map, map(A)?
(Figure: map A and its CSP graph.)
Constraint Graph
82

• Another way of visualising CSPs is by using a constraint graph.


• Constraint graph: nodes are variables, arcs are constraints.
– The nodes of the graph correspond to variables of the problem,
and a link connects any two variables that participate in a
constraint.
• Binary CSP: each constraint relates(links) only two variables.
Varieties of CSPs
83
• It is possible to categorise CSPs based on the type of variable they
use:
1. Discrete variables (Discrete CSP)
• In a discrete CSP, the variables take on discrete values.
• There are two types of discrete CSP based on the size of their domains:
a. Finite domains: means can take on a limited number of discrete
values.
– For example: the {red, green, blue} or {yes, no} domain
b. Infinite domains: Some discrete variables can have infinite domains.
– For example: integers, natural numbers, strings, etc.
2. Continuous variables (Continuous CSP)
• In a continuous CSP, variables take values from a continuous range.
For example, real numbers are continuous.
Varieties of CSPs
84

• We can also categorise CSPs based on the type of their constraints.


• There are three types of constraint we can have:
1. Unary constraints involve a single variable,
• Restricts the value of a single variable.
e.g., SA ≠ green, i.e., South Australians dislike the color green.
2. Binary constraints involve pairs of variables,
e.g., SA ≠ WA (e.g., the map-coloring example on slide 77)
3. Higher-order constraints involve 3 or more variables
• The more variables that are involved in a constraint, the harder the
problem is to solve.
• Problems with higher-order constraints are the hardest class of
problems.
85

Any questions?
