
ADVERSARIAL SEARCH

Course Code: CSC4226 Course Title: Artificial Intelligence and Expert System

Dept. of Computer Science


Faculty of Science and Technology

Lecture No: Six (6) Week No: Six (6) Semester:


Lecturer: SAZIA SHARMIN
Lecture Outline

1. Game and Its Components
2. Game as a Search Problem
3. Perfect Decisions in Two-Player Games
4. MiniMax Algorithm
5. Imperfect Decisions
6. Cutoff Search
7. Alpha-Beta Pruning
Game Playing: Introduction

Competitive environments, in which the agents' goals are in conflict, give rise to adversarial search problems, often known as games.

Mathematical game theory, a branch of economics, views any multiagent environment as a game, provided that the impact of each agent on the others is "significant," regardless of whether the agents are cooperative or competitive.

In AI, the most common games are of a rather specialized kind: what game theorists call deterministic, turn-taking, two-player, zero-sum games of perfect information (such as chess).

This means deterministic, fully observable environments in which two agents act alternately and in which the utility values at the end of the game are always equal and opposite.
Games as a Search Problem

• Some games can naturally be defined in the form of a tree.

• The branching factor is usually the average number of possible moves at each node.

• This looks like a simple search problem: a player must search this tree and reach a leaf node with a favorable outcome.
Components of a Game
Initial state: Set-up specified by the rules, e.g., the initial board configuration of chess.
Player(s): Defines which player has the move in state s.
Actions(s): Returns the set of legal moves in state s.
Result(s, a): Transition model; defines the result of taking move a in state s.
(2nd ed.: Successor function: a list of (move, state) pairs specifying legal moves.)
Terminal-Test(s): Is the game finished? True if finished, false otherwise.
Utility(s, p): Gives the numerical value of terminal state s for player p.
E.g., win (+1), lose (-1), and draw (0) in tic-tac-toe.
E.g., win (+1), lose (0), and draw (1/2) in chess.
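
To make these components concrete, here is a minimal Python sketch of them for tic-tac-toe. The class and method names (TicTacToe, initial_state, player, actions, result, terminal_test, utility) mirror the list above but are illustrative choices, not code from the lecture.

# Illustrative sketch of the game components above, using tic-tac-toe.
class TicTacToe:
    def initial_state(self):
        # 3x3 board of empty cells; 'X' (Max) moves first.
        return (tuple([None] * 9), 'X')

    def player(self, state):
        # Which player has the move in this state.
        return state[1]

    def actions(self, state):
        # Legal moves: indices of empty cells.
        board, _ = state
        return [i for i in range(9) if board[i] is None]

    def result(self, state, action):
        # Transition model: place the mark, then hand the turn to the other player.
        board, to_move = state
        new_board = list(board)
        new_board[action] = to_move
        return (tuple(new_board), 'O' if to_move == 'X' else 'X')

    def terminal_test(self, state):
        # The game is over when someone has won or the board is full.
        return self.utility(state, 'X') is not None or not self.actions(state)

    def utility(self, state, player):
        # +1 win, -1 loss, 0 draw from `player`'s point of view; None if undecided.
        board, _ = state
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
        for a, b, c in lines:
            if board[a] is not None and board[a] == board[b] == board[c]:
                return 1 if board[a] == player else -1
        return 0 if all(cell is not None for cell in board) else None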
Two Player Game

Two players: Max and Min.

The objective of both Max and Min is to optimize their winnings:
Max must reach a terminal state with the highest utility.
Min must reach a terminal state with the lowest utility.

The game ends when either Max or Min reaches a terminal state; upon reaching a terminal state, points may be awarded or sometimes deducted.
Search Problem Revisited

The simple view: a player just needs to reach a favorable terminal state.

The problem is not so simple, however: Max must reach a terminal state with as high a utility as possible regardless of Min's moves, so Max must develop a strategy that determines the best possible move for each move Min makes.
Tic-Tac-Toe Revisited
Example: Two-Ply Game

Minimax decision: maximizes the utility for Max under the assumption that Min will attempt to minimize this utility.
Minimax Algorithm

The Minimax algorithm determines the optimum strategy for Max:
1. Generate the whole game tree, down to the leaves.
2. Apply the utility (payoff) function to each leaf.
3. Use the utility of the terminal states to determine the utility of the nodes one level higher up in the search tree. Back up values from the leaves through the branch nodes:
   a Max node computes the Max of its child values;
   a Min node computes the Min of its child values.
4. At the root: choose the move leading to the child of highest value.
MiniMax Pseudocode
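
As a rough rendering of the four steps above, here is a minimal Python sketch of the minimax decision. It assumes a game object with the interface sketched under "Components of a Game" and treats 'X' as Max; these names are illustrative assumptions, not code from the lecture.

# Minimax decision for Max, following steps 1-4 above.
def minimax_decision(game, state):
    # At the root: choose the move leading to the child of highest backed-up value.
    return max(game.actions(state),
               key=lambda a: min_value(game, game.result(state, a)))

def max_value(game, state):
    # A Max node backs up the maximum of its child values.
    if game.terminal_test(state):
        return game.utility(state, 'X')   # utility from Max's point of view
    return max(min_value(game, game.result(state, a)) for a in game.actions(state))

def min_value(game, state):
    # A Min node backs up the minimum of its child values.
    if game.terminal_test(state):
        return game.utility(state, 'X')
    return min(max_value(game, game.result(state, a)) for a in game.actions(state))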
Two-Ply Game Tree
Properties of minimax
Complete?
Yes (if the tree is finite).
Optimal?
Yes (against an optimal opponent).
Time complexity?
O(b^m), where b is the branching factor and m is the maximum depth of the tree.
Space complexity?
O(bm) (depth-first search, generating all actions at once), or
O(m) (backtracking search, generating actions one at a time).
Imperfect Decisions

• Many games produce very large search trees.

• Cutoffs must be implemented due to time restrictions.
Cutoff Search
Cut off the search at a fixed depth, chosen according to the available time.

The deeper the search, the more information is available to the program and the more accurate the evaluation function.

Iterative deepening: when time runs out, the program returns the result of the deepest completed search.

Is searching a node deeper better than searching more nodes?
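
As a rough illustration of cutting off search at a fixed depth, here is a minimal sketch of depth-limited minimax. The evaluate function and the depth limit are placeholders; a real program would supply a game-specific heuristic and would typically pick the limit via iterative deepening.

# Depth-limited minimax with an evaluation function applied at the cutoff.
def h_minimax(game, state, depth, maximizing, evaluate, limit):
    if game.terminal_test(state):
        return game.utility(state, 'X')
    if depth == limit:                 # cutoff test replaces the terminal test
        return evaluate(state)         # estimated (not exact) utility of this node
    values = (h_minimax(game, game.result(state, a), depth + 1,
                        not maximizing, evaluate, limit)
              for a in game.actions(state))
    return max(values) if maximizing else min(values)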


Pruning

What is pruning?
The process of eliminating a branch of the search tree from consideration without examining it.

Why prune?
To avoid searching nodes that can never be reached in optimal play.
To speed up the search process.
Alpha-Beta Pruning
A technique for finding the optimal decision within a limited-depth search using evaluation functions.

It returns the same choice as minimax with cutoff decisions, but examines fewer nodes.

It gets its name from the two variables that are passed along during the search, which restrict the set of possible solutions.
Alpha-beta: Definitions

Alpha –
the value of the best choice (highest value) found so far along the path for MAX.

Beta –
the value of the best choice (lowest value) found so far along the path for MIN.
Implementation

Set the root node's alpha to negative infinity and its beta to positive infinity.

Search depth first, propagating alpha and beta values down to all nodes visited until reaching the desired depth.

Apply the evaluation function to get the utility of this node.
Implementation (Cont’d)
The Max player will only update the value of alpha.

The Min player will only update the value of beta.

While backtracking up the tree, the node values are passed to the upper nodes, not the values of alpha and beta.

Alpha and beta values are only passed down to the child nodes.

Prune whenever α ≥ β.
Alpha-Beta Example

(Figure: a two-ply game tree annotated with the α and β values at each node, showing how they are updated during the search and where branches are pruned.)
General alpha-beta pruning

Consider a node n somewhere in the tree.

If the player has a better choice at:
the parent node of n,
or any choice point further up,
then n will never be reached in play.

Hence, as soon as that much is known about n, it can be pruned.
Alpha-Beta Search Algorithm
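
A minimal Python sketch of the alpha-beta search described on the preceding slides; the AIMA-style max_value/min_value helpers here replace the plain minimax ones from the earlier sketch, and the choice of 'X' as Max is an illustrative assumption.

# Alpha-beta search: same result as minimax, but prunes whenever alpha >= beta.
# Alpha/beta start at -inf/+inf at the root and are passed down to child nodes;
# backed-up node values (not alpha/beta themselves) are returned to the parent.
import math

def alpha_beta_decision(game, state):
    best_value, best_action = -math.inf, None
    alpha, beta = -math.inf, math.inf
    for a in game.actions(state):
        v = min_value(game, game.result(state, a), alpha, beta)
        if v > best_value:
            best_value, best_action = v, a
        alpha = max(alpha, best_value)
    return best_action

def max_value(game, state, alpha, beta):
    if game.terminal_test(state):
        return game.utility(state, 'X')
    v = -math.inf
    for a in game.actions(state):
        v = max(v, min_value(game, game.result(state, a), alpha, beta))
        if v >= beta:               # Min already has a better choice above: prune
            return v
        alpha = max(alpha, v)
    return v

def min_value(game, state, alpha, beta):
    if game.terminal_test(state):
        return game.utility(state, 'X')
    v = math.inf
    for a in game.actions(state):
        v = min(v, max_value(game, game.result(state, a), alpha, beta))
        if v <= alpha:              # Max already has a better choice above: prune
            return v
        beta = min(beta, v)
    return v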
Effectiveness of Alpha-Beta Search
Worst case:
branches are ordered so that no pruning takes place; in this case alpha-beta gives no improvement over exhaustive search.
Best case:
each player's best move is the left-most child (i.e., evaluated first).
In practice, performance is closer to the best case than the worst case, e.g.:
sort moves by the move values remembered from the previous search;
expand captures first, then threats, then forward moves, etc.;
run iterative deepening search and sort by the values from the last iteration.
In practice we often get O(b^(d/2)) rather than O(b^d).
This is the same as having a branching factor of sqrt(b), since (sqrt(b))^d = b^(d/2), i.e., we effectively go from b to the square root of b.
E.g., in chess we go from b ≈ 35 to b ≈ 6, which permits a much deeper search in the same amount of time.
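
A quick back-of-the-envelope check of the chess figures above (b ≈ 35, d = 10), assuming perfect move ordering:

# Rough check of the effective branching factor claim for chess.
b, d = 35, 10
print(b ** d)           # exhaustive minimax: roughly 2.8e15 nodes
print(b ** (d // 2))    # alpha-beta with perfect ordering: roughly 5.3e7 nodes
print(b ** 0.5)         # effective branching factor: about 5.9, i.e. roughly 6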
Example
Answer to Example
Second Example
Answer
Final Comments about
Alpha-Beta Pruning
Pruning does not affect the final result.

Entire subtrees can be pruned.

Good move ordering improves the effectiveness of pruning.

Repeated states are again possible; store them in memory in a transposition table.
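
As a rough sketch of the transposition-table idea, the snippet below memoizes minimax values keyed on the state; a real engine would key on a position hash (e.g., Zobrist hashing) and also store search depth and bound information, which this toy version omits.

# Minimal transposition-table sketch: cache backed-up values so a repeated
# state is evaluated only once. Utilities are taken from Max's ('X') viewpoint.
transposition_table = {}

def minimax_value(game, state, maximizing):
    if state in transposition_table:       # state must be hashable (e.g., a tuple)
        return transposition_table[state]
    if game.terminal_test(state):
        v = game.utility(state, 'X')
    else:
        children = (minimax_value(game, game.result(state, a), not maximizing)
                    for a in game.actions(state))
        v = max(children) if maximizing else min(children)
    transposition_table[state] = v
    return v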
Problems

If there is only one legal move, the algorithm will still generate an entire search tree.

It is designed to identify a "best" move, not to differentiate between other moves.

It can overlook moves that forfeit something early in exchange for a better position later.

The evaluation of utility is usually not exact.

It assumes the opponent will always choose the best possible move.
References

1. Chapter 5: Adversarial Search, pp. 161-176, in "Artificial Intelligence: A Modern Approach," by Stuart J. Russell and Peter Norvig.
2. Alpha-Beta Examples | Artificial Intelligence - Fall 2023.
3. HW2_written_sol.pdf
4. Alpha-Beta Pruning - Scaler Topics.
