
Artificial Intelligence

Lecture 4
Intelligent Agents & AI Related Disciplines

Dr. Mahmoud Bassiouni


[email protected]
Lecture 4: Intelligent Agents & AI Related Disciplines

2.1 Intelligent “Rational” Agents?
▪ Recap: AI as the Study & Design of Intelligent Agents
▪ Intelligent Agents in the World
▪ An Example: Vacuum_Agent
▪ Specifying the Task Environment [PEAS]
▪ Goal-based vs. Cost-based Agents

2.2 Specifying the Task Environment
▪ Environment Types

2.3 AI: Related Disciplines
▪ Learning Agents
▪ AI vs. Machine Learning vs. Deep Learning
▪ Machine Learning?
▪ Data Mining?
▪ AI vs. Data Science
▪ Why Data Science?
▪ Why Big Data?
Environment Types
o How do we characterize the environment in which our tasks must be performed? A task usually exists within a specific environment.
o The type of environment affects several things. One of the most important is the type of algorithm that can operate in that environment.
o It also determines what kind of data must be represented: in other words, whether the agent needs an internal state that stores data apart from the environment.
o Several questions must be answered before we can build an intelligent agent that acts in a specific environment.
o The environment has several characteristics; we will study some of them.

Environment Types (we will study the differences between them):
▪ Fully Observable vs. Partially Observable
▪ Deterministic vs. Stochastic (vs. Strategic)
▪ Episodic vs. Sequential
▪ Static vs. Dynamic (vs. Semi-Dynamic)
▪ Discrete vs. Continuous
▪ Single-Agent vs. Multi-Agent
▪ Known vs. Unknown
Fully Observable vs. Partially Observable
o Do the agent's sensors give it access to the complete state of the environment?
o For any given world state, are the values of all the variables known to the agent?
o Sensors need not be hardware sensors; a sensor is anything through which the agent perceives and visualizes the environment around it, and it can be software.
o Do the agent's sensors give it all the input or data required to make decisions and improve the performance measure or utility function (maximize something, such as wins, or minimize something, such as losses or errors)?
o Can the agent see all the data, or is there some data it cannot access?
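A minimal sketch (hypothetical Python, not from the lecture; the grid world and sensor range are assumptions) of the difference: a fully observable agent's percept is the entire world state, while a partially observable agent's percept omits variables its sensors cannot reach.

```python
# Hypothetical sketch: full vs. partial observability of a simple grid world.
world_state = {
    "agent_pos": (2, 3),
    "opponent_pos": (5, 1),  # may be hidden in the partially observable case
    "goal_pos": (7, 7),
}

def fully_observable_percept(state):
    # The sensors expose every variable of the world state.
    return dict(state)

def partially_observable_percept(state, sensor_range=2):
    # The sensors only report variables within range; the rest stay unknown.
    ax, ay = state["agent_pos"]
    percept = {"agent_pos": state["agent_pos"]}
    ox, oy = state["opponent_pos"]
    if abs(ox - ax) <= sensor_range and abs(oy - ay) <= sensor_range:
        percept["opponent_pos"] = state["opponent_pos"]
    return percept  # a missing key means its value is unknown to the agent

print(fully_observable_percept(world_state))      # all three variables
print(partially_observable_percept(world_state))  # opponent_pos is missing
```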

Fully Observable vs. Partially Observable
[Slide images: a fully visible game board (left) vs. a group of robots (right)]
Fully Observable vs. Partially Observable
The image on the left shows that:
• Every input required is seen clearly.
• The balls and the players are seen clearly, and the entire board is visible.
• No information is hidden. (Fully Observable)

The image on the right shows that:
• There is a group of robots (or autonomous drivers), and some variables have values that are not known.
• A robot cannot know what another robot is thinking of doing; it has no sensor to detect that.
• The missing values will affect the overall performance. (Partially Observable)

Some games are like this, such as Poker and other card games: there is missing information.
Deterministic vs. Stochastic (vs. Strategic)
o There are three important terms: deterministic, stochastic, and a term between them called strategic.
o If an agent is in a certain state and decides to move to the next state, does the outcome depend only on the state the agent was in and the action it made, or is there some indeterminism (randomness)?
o An environment is deterministic if there is a unique successor state given the current state and action. In other words, the next state depends only on the current state and the action taken.
o An example is the X and O game (Tic-Tac-Toe).
Deterministic vs. Stochastic (vs. Strategic)
o An environment is stochastic if there is a distribution over successor states given the current state and action; you feel that there is some sort of randomness. For example, backgammon, where the dice introduce a random factor.
o Strategic: the environment is deterministic except for the actions of other agents. The outcome depends on the actions of the other agents and the strategies they follow, which affect the environment.
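A hypothetical sketch (the 80% success probability is an assumption, not from the lecture) contrasting the two transition models: the deterministic step always yields the same successor, while the stochastic step samples from a distribution over successors.

```python
import random

def deterministic_step(state: int, action: int) -> int:
    # Unique successor: the same (state, action) always gives the same result.
    return state + action

def stochastic_step(state: int, action: int) -> int:
    # Distribution over successors: the intended move succeeds with assumed
    # probability 0.8; otherwise the agent stays where it is.
    return state + action if random.random() < 0.8 else state

print(deterministic_step(3, 1))  # always 4
print(stochastic_step(3, 1))     # 4 or 3, depending on chance
```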
Episodic vs. Sequential
o In an episodic environment, the actions or decisions of the agent are separate and not related to each other.
• For example, a spam filter takes an email and decides whether or not it is spam. One email arrives and is classified as spam; then another email arrives and the agent decides it is not spam.
• The agent's decisions are therefore separate: they are not related to previous decisions, nor to the ones that follow.
• Such an environment is called episodic.
• Episodic means the agent's experience is divided into unconnected single decisions/actions.
Episodic vs. Sequential
o Sequential means that the agent's actions are connected, as in chess or Pac-Man: each action you take will affect all the decisions that come next.
• Which is easier, episodic or sequential?
Of course episodic. Sequential environments require some sort of planning, because an action's effect is not felt only now; it will affect all the following actions.
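A minimal sketch of an episodic task (hypothetical; the keyword rule and word list are assumptions): each email is classified on its own, with no state carried over between decisions.

```python
# Hypothetical episodic agent: every decision ignores all previous episodes.
SPAM_WORDS = {"lottery", "prize", "winner"}

def classify(email: str) -> str:
    # The decision depends only on the current percept (this one email).
    words = set(email.lower().split())
    return "spam" if words & SPAM_WORDS else "not spam"

for email in ["You are a lottery winner", "Meeting at 10 am"]:
    print(classify(email))  # episodes are independent of each other
```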
Static vs. Dynamic (vs. Semi-Dynamic)
o The main question: is the world changing while the agent is thinking?
o If the environment around the agent stays constant or fixed while it thinks, the environment is called static. When you try to solve a Rubik's Cube, the environment is static: while the agent thinks, the cube is in its hand and will not change.
o The agent can think while the environment remains constant or fixed.
Static vs. Dynamic (vs. Semi-Dynamic)
o In a dynamic environment: when Jerry tries to escape from Tom, the situation is not constant, because Tom keeps moving. For Jerry there must therefore be a continuous flow of information about Tom's position.
Static vs. Dynamic (vs. Semi-Dynamic)
o Likewise, for an autonomous driver or a taxi in the street, the environment around it is not fixed but dynamic. While you are thinking about what to do, other people are also deciding what they will do: people walking in the streets, other cars moving in different directions, traffic lights turning red or green, and so on.
o You need to try different algorithms to handle a dynamic environment.
Static vs. Dynamic (vs. Semi-Dynamic)
o Semi-dynamic: the environment does not change with the passage of time, but the agent's performance score does. In a given situation you need to take an action, and the input data are constant (the environment does not change), but if you take too long to act, the payoff keeps decreasing.
For example: chess with a clock.
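A hypothetical sketch of a semi-dynamic setting (the time budget and function names are assumptions): the board is frozen while the agent deliberates, but the remaining clock time, which is part of the score, keeps shrinking.

```python
import time

def play_with_clock(choose_move, board, time_budget: float = 30.0):
    # Semi-dynamic: the board does not change while the agent thinks,
    # but the time spent thinking is deducted from its clock.
    start = time.monotonic()
    move = choose_move(board)              # environment is static here
    remaining = time_budget - (time.monotonic() - start)
    return move, remaining                 # less time left = worse position

# Usage: a trivial agent that picks the first legal move.
move, left = play_with_clock(lambda b: b[0], ["e2e4", "d2d4"])
print(move, round(left, 3))
```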
Discrete vs. Continuous
o This distinction applies to the percepts (data), the timing of actions, and the description of the environment's states. Are these factors discrete or continuous?
o Are the actions continuous or discrete?
o For example, for an autonomous driver: the percepts are continuous, because you constantly take in new information while moving in the street. The actions are continuous, because you are steering all the time. The descriptions of states are continuous (moving through different streets, at different angles and widths). (Continuous)
Discrete vs. Continuous
o For example, in chess: the percept is the move made by the other player. The action happens once per turn; you are not playing continuously every 5, 10, or 20 seconds, but one move per turn. The description of the state is discrete: a piece goes to h8, h7, or a5.
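A hypothetical sketch of the two kinds of action space (the limits are assumptions): chess actions form a finite, enumerable set, while driving actions range over real values.

```python
# Discrete action space: a finite set of legal moves.
chess_actions = {"e2e4", "g1f3", "h7h8"}

# Continuous action space: any real-valued pair within physical limits.
def driving_action(steering_deg: float, throttle: float):
    assert -35.0 <= steering_deg <= 35.0 and 0.0 <= throttle <= 1.0
    return steering_deg, throttle

print("e2e4" in chess_actions)    # membership in a finite set
print(driving_action(12.5, 0.3))  # one of infinitely many possible actions
```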
Single-Agent vs. Multi-Agent
o The main question: is the agent operating by itself, alone in the environment, or are there many other agents around it in the environment?
o It is considered a tricky question, because it depends on how the agent is implemented.
Single-Agent vs. Multi-Agent
o Is an autonomous driver a single agent or part of a multi-agent environment? One can say it is multi-agent, because the cars around it have performance measures they want to improve or enhance, and their actions may help you or may work against you.
o In that case it is considered multi-agent, because the agents around you actually think; they are not just a simulation. If they are only a simulation (laws of physics, no thinking), then the autonomous driver is considered a single agent.
Single-Agent vs. Multi-Agent
o A multi-agent environment can be competitive or cooperative.
o In the autonomous-driving example, if the goal of the first agent and of the other agents is not to collide, then the agents are cooperative, because they will try to help each other avoid collisions.
o If there are two agents and only one parking place, both agents will compete to reach the parking place first.
o It can also be a mix of the two (cooperative and competitive): some agents help you, and some compete with you (by taking a resource you want) to increase their performance measure and decrease yours.
Known vs. Unknown
o Here we describe whether the environment is known or unknown to the agent: does it know the rules of the environment or not? If I want to move from one state to another, what should I do?
o How will I win? For each move I make, how many points will I gain?
o If the agent knows all of this, the environment is known to it; otherwise, the environment is unknown to the agent.
Known vs. Unknown
o The game Monopoly is known, because the agent knows the rules and knows how to move from one state to another.
o The game on the right is unknown: you do not know how to move in it, and you discover the environment in order to take an action. (An unknown environment for the agent.)

[Slide images: Monopoly (known) vs. an unfamiliar game (unknown)]
Examples of the different environments

                 Word Jumble     Chess with      Autonomous
                 Solver          a Clock         Driving
Observable       Fully           Fully           Partially
Deterministic    Deterministic   Strategic       Stochastic
Episodic         Episodic        Sequential      Sequential
Static           Static          Semi-dynamic    Dynamic
Discrete         Discrete        Discrete        Continuous
Single agent     Single          Multi           Multi
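Such a characterization can be written down directly in code; here is a minimal sketch using a small Python dataclass (the field names and string values are illustrative assumptions, not from the lecture).

```python
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    observable: str   # "fully" or "partially"
    determinism: str  # "deterministic", "strategic", or "stochastic"
    episodic: bool    # True = episodic, False = sequential
    dynamics: str     # "static", "semi-dynamic", or "dynamic"
    discrete: bool    # True = discrete, False = continuous
    agents: str       # "single" or "multi"

# The "Autonomous Driving" column of the table above:
autonomous_driving = TaskEnvironment(
    observable="partially", determinism="stochastic", episodic=False,
    dynamics="dynamic", discrete=False, agents="multi",
)
print(autonomous_driving)
```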
Environments & AI Areas
o If the environment is deterministic and static, the AI area that deals with it is called Searching Algorithms.
o If the environment is deterministic and sequential, the AI area that deals with it is called Planning.
o If the environment is stochastic and static, the AI area that deals with it is called Uncertainty.
o If the environment is stochastic and sequential, the AI area that deals with it is called Decision Theory.
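A hypothetical one-look summary of this mapping as a Python dictionary (the key and value spellings are assumptions):

```python
# Mapping from (determinism, structure) of the environment to the AI area
# that commonly deals with it, following the slide above.
AI_AREA = {
    ("deterministic", "static"):     "Searching algorithms",
    ("deterministic", "sequential"): "Planning",
    ("stochastic",    "static"):     "Uncertainty",
    ("stochastic",    "sequential"): "Decision theory",
}
print(AI_AREA[("stochastic", "sequential")])  # Decision theory
```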
Exercises
What have we learned?
▪ Is the environment of an autonomous taxi driver a competitive
multiagent environment or a cooperative multiagent environment?
▪ Characterize the following Task Environments: Poker Game – Medical
Diagnosis – Image Analysis – Interactive English Tutor.
▪ Is the distinction between known and unknown environments the same
as the one between fully observable and partially observable?
▪ Describe briefly the following statements (while giving an example):
▪ Episodic environments are much simpler than sequential
environments because the agent does not need to think ahead.
▪ Communication often emerges as a rational behavior in multiagent
environments.
THANK YOU
