Lec06-Advanced Constraint Satisfaction Problems

The document covers advanced concepts in Artificial Intelligence, focusing on Constraint Satisfaction Problems (CSPs) and solution techniques such as backtracking search, local search, and simulated annealing. It discusses arc consistency, iterative improvement, and genetic algorithms for solving CSPs, and highlights the theoretical guarantees of simulated annealing and the application of genetic algorithms to optimization problems.


Artificial Intelligence and Workshop

Faculty of Mathematics and Computer Science
Amirkabir University of Technology

Course Instructor: Mehdi Ghatee
Professor, Department of Computer Science
www.aut.ac.ir

Workshop Instructor: Behnam Yousefi Mehr
PhD Student in Computer Science
Artificial Intelligence & Workshop
o Advanced Constraint Satisfaction Problems
CS 188: Artificial Intelligence
Constraint Satisfaction Problems II
Instructor: Anca Dragan
University of California, Berkeley

Slides by Dan Klein, Pieter Abbeel, Anca Dragan (ai.berkeley.edu)


Today

o Efficient Solution of CSPs

o Local Search
Constraint Satisfaction Problems

[Figure: constraint graph with N variables x1, x2, ..., each with domain D, connected by constraints]
Standard Search Formulation
o Standard search formulation of CSPs

o States defined by the values assigned so far (partial assignments)
o Initial state: the empty assignment, {}
o Successor function: assign a value to an unassigned variable
o Goal test: the current assignment is complete and satisfies all constraints

o We started with backtracking search

Backtracking Search

1. Fix ordering
2. Check constraints as you go

[Demo: coloring -- backtracking]

Explain it to your neighbor:
o Why is it OK to fix the ordering of variables?
o Why is it good to fix the ordering of variables?
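A minimal sketch of this recursive backtracking scheme. The csp object, with variables, domains, neighbors, and conflicts(x, vx, y, vy) members, is a hypothetical interface assumed here (and reused in the later sketches), not something defined in the slides:

def backtracking_search(csp):
    """Depth-first search over partial assignments, checking constraints as we go."""
    return backtrack({}, csp)

def backtrack(assignment, csp):
    if len(assignment) == len(csp.variables):
        return assignment                            # complete and consistent
    # Fixed variable ordering: always pick the first unassigned variable.
    var = next(v for v in csp.variables if v not in assignment)
    for value in csp.domains[var]:
        # Check constraints as you go: only try values consistent with assigned neighbors.
        if all(not csp.conflicts(var, value, other, assignment[other])
               for other in csp.neighbors[var] if other in assignment):
            assignment[var] = value
            result = backtrack(assignment, csp)
            if result is not None:
                return result
            del assignment[var]                      # undo and try the next value
    return None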
Filtering

Keep track of domains for unassigned variables and cross off bad options
Consistency of a Single Arc
o An arc X → Y is consistent iff for every x in the tail there is some y in the head which could be assigned without violating a constraint

[Figure: Australia map-coloring constraint graph over WA, NT, SA, Q, NSW, V]

Delete from the tail!
Arc Consistency of an Entire CSP
o A simple form of propagation makes sure all arcs are consistent:

[Figure: Australia map-coloring constraint graph over WA, NT, SA, Q, NSW, V]

o Important: If X loses a value, neighbors of X need to be rechecked!
o Arc consistency detects failure earlier than forward checking
o Can be run as a preprocessor or after each assignment
o Remember: Delete from the tail!
Enforcing Arc Consistency in a CSP
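Enforcing arc consistency over a whole CSP is typically done with the AC-3 algorithm. A minimal sketch, assuming the same hypothetical csp interface as in the backtracking sketch above, with domains stored as mutable sets:

from collections import deque

def ac3(csp):
    """Enforce arc consistency: prune tail values that have no consistent head value."""
    queue = deque((x, y) for x in csp.variables for y in csp.neighbors[x])
    while queue:
        x, y = queue.popleft()
        if revise(csp, x, y):
            if not csp.domains[x]:           # empty domain: failure detected early
                return False
            for z in csp.neighbors[x]:       # x lost a value: recheck arcs into x
                if z != y:
                    queue.append((z, x))
    return True

def revise(csp, x, y):
    """Delete from the tail: drop values of x with no consistent value of y."""
    removed = False
    for vx in list(csp.domains[x]):
        if all(csp.conflicts(x, vx, y, vy) for vy in csp.domains[y]):
            csp.domains[x].remove(vx)
            removed = True
    return removed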
Limitations of Arc Consistency

o After enforcing arc consistency:
o Can have one solution left
o Can have multiple solutions left
o Can have no solutions left (and not know it)

o Arc consistency still runs inside a backtracking search!

[Demo: coloring -- forward checking]
[Demo: coloring -- arc consistency]
K-Consistency
o Increasing degrees of consistency:
o 1-Consistency (Node Consistency): each single node's domain has a value which meets that node's unary constraints
o 2-Consistency (Arc Consistency): for each pair of nodes, any consistent assignment to one can be extended to the other
o K-Consistency: for each k nodes, any consistent assignment to k-1 can be extended to the kth node

o Higher k is more expensive to compute

o (You need to know the k=2 case: arc consistency)
Ordering
Ordering: Least Constraining Value
o Value Ordering: Least Constraining Value
o Given a choice of variable, choose the least constraining value
o I.e., the one that rules out the fewest values in the remaining variables
o Note that it may take some computation to determine this! (E.g., rerunning filtering)

o Why least rather than most?

o Combining these ordering ideas makes 1000 queens feasible
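A minimal sketch of least-constraining-value ordering, again under the hypothetical csp interface from the earlier sketches; counting conflicting neighbor values is a simple, rough measure of how constraining a value is:

def order_values_lcv(csp, var):
    """Order var's values so those that rule out the fewest neighbor values come first."""
    def ruled_out(value):
        # Count neighbor values that would become inconsistent with this choice.
        return sum(csp.conflicts(var, value, n, nv)
                   for n in csp.neighbors[var]
                   for nv in csp.domains[n])
    return sorted(csp.domains[var], key=ruled_out)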
Iterative Improvement
Iterative Algorithms for CSPs
o Local search methods typically work with "complete" states, i.e., all variables assigned

o To apply to CSPs:
o Take an assignment with unsatisfied constraints
o Operators reassign variable values
o No fringe! Live on the edge.

o Algorithm: while not solved,
o Variable selection: randomly select any conflicted variable
o Value selection: min-conflicts heuristic: choose a value that violates the fewest constraints
o I.e., hill climb with h(x) = total number of violated constraints
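A minimal sketch of this min-conflicts loop, assuming the same hypothetical csp interface; max_steps is an illustrative cutoff:

import random

def min_conflicts(csp, max_steps=100_000):
    """Start from a random complete assignment, then repeatedly pick a
    conflicted variable and give it its least-conflicting value."""
    assignment = {v: random.choice(list(csp.domains[v])) for v in csp.variables}
    for _ in range(max_steps):
        conflicted = [v for v in csp.variables
                      if num_conflicts(csp, v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment                       # solved: no constraint violated
        var = random.choice(conflicted)             # variable selection: random conflicted variable
        assignment[var] = min(csp.domains[var],     # value selection: min-conflicts heuristic
                              key=lambda val: num_conflicts(csp, var, val, assignment))
    return None                                     # step budget exhausted

def num_conflicts(csp, var, value, assignment):
    """Number of constraints var=value violates against the rest of the assignment."""
    return sum(csp.conflicts(var, value, n, assignment[n]) for n in csp.neighbors[var])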
Local Search
Local Search
o Tree search keeps unexplored alternatives on the fringe (ensures completeness)

o Local search: improve a single option until you can't make it better (no fringe!)

o New successor function: local changes

o Generally much faster and more memory efficient (but incomplete and suboptimal)
Hill Climbing
o Simple, general idea:
o Start wherever
o Repeat: move to the best neighboring state
o If no neighbors better than current, quit

o What’s bad about this approach?

o What’s good about it?
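A minimal hill-climbing sketch; neighbors(state) and score(state) are hypothetical problem-specific functions, with score to be maximized:

def hill_climb(start, neighbors, score):
    """Greedy ascent: move to the best neighbor until no neighbor improves the score."""
    current = start
    while True:
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):
            return current        # no better neighbor: local maximum (or plateau), quit
        current = best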


Hill Climbing Diagram
Hill Climbing Quiz

[Figure: hill-climbing landscape with starting points X, Y, and Z]

Starting from X, where do you end up?

Starting from Y, where do you end up?

Starting from Z, where do you end up?


Simulated Annealing
o Idea: Escape local maxima by allowing downhill moves
o But make them rarer as time goes on

o Simulated annealing is a method for solving unconstrained and bound-constrained optimization problems. The method models the physical process of heating a material and then slowly lowering the temperature to decrease defects, thus minimizing the system energy.
Simulated Annealing
The function E(s) to be minimized is analogous to the internal energy of a physical system in state s. The goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy.

https://en.wikipedia.org/wiki/Simulated_annealing
Simulated Annealing
Overview
o The basic iteration: At each step, the simulated annealing heuristic considers some neighbouring state s* of the current state s, and probabilistically decides between moving the system to state s* or staying in state s.

o The neighbours of a state: Optimization of a solution involves evaluating the neighbours of a state of the problem, which are new states produced by conservatively altering a given state. For example, in the travelling salesman problem each state is typically defined as a permutation of the cities to be visited, and the neighbours of any state are the set of permutations produced by swapping any two of these cities.

o Metaheuristics use the neighbours of a solution as a way to explore the solution space, and although they prefer better neighbours, they also accept worse neighbours in order to avoid getting stuck in local optima; they can find the global optimum if run for long enough.

o Acceptance probabilities: The probability of making the transition from the current state s to a candidate new state s* is specified by an acceptance probability function P(e, e*, T), which depends on the energies e = E(s) and e* = E(s*) of the two states and on a global time-varying parameter T called the temperature. States with a smaller energy are better than those with a greater energy.
Boltzmann distribution
o In statistical mechanics and mathematics, a Boltzmann distribution (also called a Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form:

    p_i ∝ exp(-ε_i / (kT))

o where p_i is the probability of the system being in state i, ε_i is the energy of that state, and the constant kT of the distribution is the product of the Boltzmann constant k and the thermodynamic temperature T.

o The ratio of the probabilities of two states is known as the Boltzmann factor and characteristically depends only on the states' energy difference:

    p_i / p_j = exp((ε_j - ε_i) / (kT))
Simulated Annealing
o Idea: Escape local maxima by allowing downhill moves
o But make them rarer as time goes on

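A minimal simulated annealing sketch, written for energy minimization as in the excerpt above; neighbors(state) and energy(state) are hypothetical problem-specific functions, and the geometric cooling schedule is one common, illustrative choice:

import math
import random

def simulated_annealing(start, neighbors, energy,
                        t_start=1.0, t_min=1e-3, cooling=0.999):
    """Minimize energy(state). Worse (higher-energy) moves are accepted with
    probability exp(-delta / T), so they become rarer as T cools."""
    current = best = start
    t = t_start
    while t > t_min:
        candidate = random.choice(neighbors(current))
        delta = energy(candidate) - energy(current)
        # Always accept improvements; accept worsenings with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if energy(current) < energy(best):
                best = current
        t *= cooling                 # cool down: geometric schedule
    return best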
Simulated Annealing
o Theoretical guarantee:
o Stationary distribution: p(x) ∝ exp(E(x) / kT)
o If T decreased slowly enough, will converge to optimal state!

o Is this an interesting guarantee?

o Sounds like magic, but reality is reality:
o The more downhill steps you need to escape a local optimum, the less likely you are to ever make them all in a row
o People think hard about ridge operators which let you jump around the space in better ways
Genetic Algorithms

o Genetic algorithms use a natural selection metaphor
o Keep best N hypotheses at each step (selection) based on a fitness function
o Also have pairwise crossover operators, with optional mutation to give variety

o Possibly the most misunderstood, misapplied (and even maligned) technique around
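A minimal genetic algorithm sketch for the N-queens example on the next slide (genome[i] = row of the queen in column i; fitness = number of non-attacking pairs); all names and parameter values here are illustrative choices, not taken from the slides:

import random

N = 8  # board size; genome[i] = row of the queen in column i

def fitness(genome):
    """Number of non-attacking queen pairs (max = N*(N-1)/2)."""
    attacks = sum(1
                  for i in range(N) for j in range(i + 1, N)
                  if genome[i] == genome[j] or abs(genome[i] - genome[j]) == j - i)
    return N * (N - 1) // 2 - attacks

def crossover(a, b):
    """Single-point crossover: splice a prefix of one parent onto a suffix of the other."""
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    """With small probability, move one queen to a random row."""
    if random.random() < rate:
        genome = list(genome)
        genome[random.randrange(N)] = random.randrange(N)
    return genome

def genetic_algorithm(pop_size=100, generations=1000):
    population = [[random.randrange(N) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == N * (N - 1) // 2:
            return population[0]                    # all pairs non-attacking: solved
        parents = population[:pop_size // 2]        # selection: keep the fittest half
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)             # best found within the budget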
Example: N-Queens

o Why does crossover make sense here?
o When wouldn't it make sense?
o What would mutation be?
o What would a good fitness function be?
