
Artificial Intelligence (KCS-071A)

Unit 1
Part 2
Introduction to AI

Dr Rashmi Sharma Session 2024-2025


Topics to be covered...
• Agents & Their Types
• Agent Environment in AI
• Characteristics and application of Learning Agents
• Problem Solving Approach to Typical AI problems



Definition of Intelligent Agent
•An Agent is anything that can be viewed as perceiving its environment
through Sensors & acting upon that environment through Actuators.
•An agent consists of sensors that perceive the environment; based on those
percepts, actuators then take actions in the environment.
Following are the main four rules for an AI agent:
Rule 1: An AI agent must have the ability to perceive the environment.
Rule 2: The observation must be used to make decisions.
Rule 3: Decision should result in an action.
Rule 4: The action taken by an AI agent must be a rational action.



Definition of Intelligent Agent
•A Human Agent has eyes, ears, & other organs for sensing and hands, legs &
other body parts for actuators.
•A Robotic Agent might have cameras & infrared range finders for sensors &
various motors for actuators.



Definition of Intelligent Agent
•Percept - the agent's perceptual input at any given instant.
•An agent's percept sequence is the complete history of everything the agent
has ever perceived.
•If we can specify the agent's choice of action for every possible percept
sequence, then we have said more or less everything there is to say about the
agent.
•Mathematically, the agent's behaviour is described by an agent function that
maps any given percept sequence to an action.
•Internally, the agent function for an artificial agent is implemented by
an agent program.
Definition of Intelligent Agent
•A Rational Agent is one that does the right thing: conceptually speaking, every
entry in the table for the agent function is filled out correctly.
•Obviously, doing the right thing is better than doing the wrong thing, but what
does it mean to do the right thing?
•As a first approximation, we will say that the right action is the one that will
cause the agent to be most successful.
•A Performance Measure embodies the criteria for the success of an agent's
behaviour.



Definition of Intelligent Agent
•When an agent is plunked down in an environment, it generates a sequence of
actions according to the percepts it receives.
•This sequence of actions causes the environment to go through a sequence of
states.
•If the sequence is desirable, then the agent has performed well.
•Specifying the task environment: for an agent, we have to specify the
performance measure, the environment, & the agent's actuators & sensors.
•We call this the PEAS (Performance, Environment, Actuators,
Sensors) description.
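A PEAS description can be captured directly as a small data structure. The sketch below is illustrative only; the field values for the taxi driver are taken from the example later in these slides.

```python
# A minimal sketch of a PEAS description as a named tuple.
# The field names follow the PEAS acronym; the taxi entries are
# example values, not an exhaustive specification.
from typing import NamedTuple

class PEAS(NamedTuple):
    performance: list  # criteria for success
    environment: list  # what the agent operates in
    actuators: list    # how the agent acts
    sensors: list      # how the agent perceives

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximise profit"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brakes", "signal", "horn", "display"],
    sensors=["camera", "sonar", "speedometer", "GPS", "odometer"],
)

print(taxi.performance)
```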



Learning Agent
A learning agent in AI is an agent that can learn from
its past experiences; that is, it has learning capabilities.
It starts with basic knowledge and is then able to act and adapt automatically
through learning.

A learning agent has four main conceptual components:

1.Learning element: responsible for making improvements by learning
from the environment.
2.Critic: the learning element takes feedback from the critic, which describes how
well the agent is doing with respect to a fixed performance standard.
3.Performance element: responsible for selecting external actions.
4.Problem generator: responsible for suggesting actions that
will lead to new and informative experiences.
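The four components above can be sketched as methods of one class. This is a hypothetical illustration: the percepts, rewards, and "rules" here are invented for the example and are not part of the original slides.

```python
# A minimal, hypothetical sketch of the four learning-agent components.
class LearningAgent:
    def __init__(self):
        self.rules = {}          # condition -> action, improved over time
        self.standard = 1.0      # fixed performance standard used by the critic

    def performance_element(self, percept):
        """Select an external action for the current percept."""
        return self.rules.get(percept, "explore")

    def critic(self, reward):
        """Feedback: how well the agent is doing vs. the fixed standard."""
        return reward - self.standard

    def learning_element(self, percept, action, feedback):
        """Improve the rules using the critic's feedback."""
        if feedback >= 0:
            self.rules[percept] = action

    def problem_generator(self):
        """Suggest an exploratory action leading to new experiences."""
        return "try-something-new"

agent = LearningAgent()
feedback = agent.critic(reward=1.5)        # doing better than the standard
agent.learning_element("dirt", "suck", feedback)
print(agent.performance_element("dirt"))   # now selected by a learned rule
```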
Agent Program - Learning Agent
[Figure: learning agent architecture diagram]


PEAS Description Examples
•PEAS (Performance, Environment, Actuators, Sensors) Description Examples
•Taxi Driver -
•Performance - safe, fast, legal, comfortable trip, maximise profit
•Environment - roads, other traffic, pedestrians, customers
•Actuators - steering, accelerator, brakes, signal, horn, display
•Sensors - camera, sonar, speedometer, GPS, odometer, engine sensors,
accelerometer, keyboard
•Medical Diagnosis System -
•Performance - healthy patient, minimise costs, lawsuits
•Environment - patient, staff, hospital
•Actuators - display of questions, tests, diagnoses, treatments & referrals
•Sensors - keyboard entry of symptoms, findings, patient's answers



PEAS Description Examples
•Satellite Image Analysis System -
•Performance- Correct image categorisation
•Environment- Downlink from orbiting satellite
•Actuators- Display categorisation of scene
•Sensors- Color pixel arrays
•Part-picking Robot -
•Performance - percentage of parts in correct bins
•Environment - conveyor belt with parts, bins
•Actuators - jointed arm & hand
•Sensors - camera, joint angle sensors



PEAS Description Examples
•Refinery Controller -
•Performance - maximise purity, yield, safety
•Environment - refinery, operators
•Actuators - valves, pumps, heaters, displays
•Sensors - temperature, pressure, chemical sensors
•Interactive English Tutor -
•Performance - maximise student's score on test
•Environment - set of students, testing agency
•Actuators - display of exercises, suggestions, corrections
•Sensors - keyboard entry



Structure of Agents
•The job of AI is to design the agent program that implements the agent function
mapping percepts to actions.
•We assume this program will run on some sort of computing device with physical
sensors and actuators.
•Agent = architecture + program
•Obviously, the program we choose should be appropriate for the architecture. E.g., if a
program is going to recommend actions like Walk, the architecture had better have
legs.
•In general, the architecture makes the percepts from the sensors available to the
program, runs the program, & feeds the program's action choices to the actuators as
they are generated.



Agent Program
•The agent programs that we will design all have the same skeleton/structure:
1. They take the current percept as input from the sensors.
2. They return an action to the actuators.
•The agent program takes the current percept as input because nothing more is
available from the environment.
•If the agent's actions depend on the entire percept sequence, the agent will have to
remember the percepts.



Agent Program
•Algorithm: TABLE-DRIVEN-AGENT (percept) returns an action
•Static: percepts, a sequence, initially empty
        table, a table of actions, indexed by percept sequences, initially fully specified
    Append percept to the end of percepts
    action <- LOOKUP (percepts, table)
    Return action
The table represents explicitly the agent function that the agent program embodies.
Let P be the set of possible percepts,
and let T be the lifetime of the agent (the total number of percepts it will receive).
The lookup table will then contain the sum over t = 1 to T of |P|^t entries.
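The pseudocode above can be transcribed almost directly into Python. The toy table below, for a two-percept vacuum world with lifetime 2, is an invented illustration; note how the table must be indexed by the whole percept sequence, which is what drives the |P|^t blow-up.

```python
# A direct transcription of TABLE-DRIVEN-AGENT.
percepts = []  # static: the percept sequence, initially empty

# Toy table, fully specified for every percept sequence up to length 2.
table = {
    ("dirty",): "suck",
    ("clean",): "right",
    ("clean", "dirty"): "suck",
    ("clean", "clean"): "right",
    ("dirty", "clean"): "right",
    ("dirty", "dirty"): "suck",
}

def table_driven_agent(percept):
    percepts.append(percept)           # append percept to the sequence
    return table[tuple(percepts)]      # LOOKUP(percepts, table)

print(table_driven_agent("dirty"))  # suck
print(table_driven_agent("clean"))  # right

# Table-size formula: with |P| percepts and lifetime T,
# entries = sum of |P|**t for t = 1..T
print(sum(2 ** t for t in range(1, 3)))  # 2 + 4 = 6 entries for |P|=2, T=2
```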



Agent Program
•Consider the automated taxi: the visual input from a single camera comes in at the rate
of roughly 27 megabytes per second.
•This gives a lookup table with over 10^250,000,000,000 entries (the exponent itself
is 250 billion) for an hour's driving.
•The lookup table for chess, a tiny well-behaved fragment of the real world, would have
at least 10^150 entries.

Drawbacks of the Lookup Table:


1. No physical agent in this universe will have the space to store the table.
2. No agent could ever learn all the right table entries from its experience
3. The designer would not have time to create the table.
4. Even if the environment is simple enough to yield a feasible table size, the
designer still has no guidance about how to fill in the table entries.



Learning Agent Characteristics
1. Situatedness: the agent receives some form of sensory input from its environment
and then performs some actions that change its environment in some way.

2. Autonomy: the agent is able to act without direct intervention from
humans or other agents; it has almost complete control over its own
actions and internal state.

3. Adaptivity: the agent is capable of reacting flexibly
to changes within its environment.

4. Sociability: the agent is capable of interacting
in a peer-to-peer manner with other agents or humans.



Agent Environment in AI
An environment is everything in the world which surrounds the agent, but it is not
a part of the agent itself.
An environment can be described as a situation in which an agent is present.
The environment is where the agent lives and operates; it provides the agent with
something to sense and act upon.
An environment is mostly said to be non-deterministic.



Agent Environment in AI
Features of Environment
1.Fully observable vs Partially Observable
2.Static vs Dynamic
3.Discrete vs Continuous
4.Deterministic vs Stochastic
5.Single-agent vs Multi-agent
6.Episodic vs sequential
7.Known vs Unknown
8.Accessible vs Inaccessible



Agent Environment in AI
1.Fully observable vs Partially Observable
If an agent's sensors can sense or access the complete state of the environment at each
point in time, then it is a fully observable environment; otherwise it is partially observable.
If an agent has no sensors at all, the environment is called unobservable.

2.Static vs Dynamic
If the environment can change while an agent is deliberating, then such an environment is
called a dynamic environment; otherwise it is called a static environment.

Static environments are easy to deal with because the agent does not need to keep looking
at the world while deciding on an action.
Example: taxi driving (dynamic), crossword puzzles (static).
Agent Environment in AI
3. Deterministic vs Stochastic
If the agent's current state and selected action completely determine the next state of the
environment, then such an environment is called a deterministic environment.

A stochastic environment is random in nature and cannot be determined completely by the
agent.

4. Single-agent vs Multi-agent
If only one agent is involved in an environment and operates by itself, then such an
environment is called a single-agent environment.

However, if multiple agents are operating in an environment, then such an environment is
called a multi-agent environment.



Agent Environment in AI
5. Accessible vs Inaccessible
If an agent can obtain complete and accurate information about the environment's state,
then such an environment is called an accessible environment; otherwise it is inaccessible.

An empty room whose state can be defined by its temperature is an example of an
accessible environment.

Information about an event on Earth is an example of an inaccessible environment.



Agent Types
1.Simple reflex agent
2.Model-based reflex agent
3.Goal-based agents
4.Utility-based agents



Agent Program- Simple Reflex Agent
1.Simple Reflex Agent

•Simple reflex agents are the simplest agents.
•These agents take decisions on the basis of the current percept and
ignore the rest of the percept history.
•The simple reflex agent works on the condition-action rule, which means it
maps the current state to an action.
•Based on if-then rules.
•The environment should be fully observable (so that the agent knows the complete
state).



Agent Program - Simple Reflex Agent
[Figure: simple reflex agent architecture diagram]


Agent Program- Simple Reflex Agent
•For example, a room-cleaner agent works only if there is dirt in the room.
•E.g., imagine yourself as the driver of the automated taxi. If the car in front brakes &
its brake lights come on, then you should notice this & initiate braking.
•We call this connection a condition-action rule (or situation-action rule), written as

If car-in-front-is-braking then initiate-braking



Agent Program- Simple Reflex Agent
Algorithm: SIMPLE-REFLEX-AGENT (percept) returns an action
Static: rules, a set of condition-action rules
    state <- INTERPRET-INPUT (percept)
    rule <- RULE-MATCH (state, rules)
    action <- RULE-ACTION [rule]
    Return action
The INTERPRET-INPUT function generates an abstracted description of
the current state from the percept.
The RULE-MATCH function returns the first rule in the set of rules
that matches the given state description.
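A sketch of this algorithm in Python, using the two-square vacuum world as the example. INTERPRET-INPUT and RULE-MATCH reduce to simple dictionary operations here; the rules themselves are an invented illustration.

```python
# A sketch of SIMPLE-REFLEX-AGENT for a two-square vacuum world.
rules = {
    "dirty": "suck",         # condition -> action
    "clean-at-A": "right",
    "clean-at-B": "left",
}

def interpret_input(percept):
    """Abstract the raw percept (location, status) into a state description."""
    location, status = percept
    return "dirty" if status == "dirty" else f"clean-at-{location}"

def simple_reflex_agent(percept):
    state = interpret_input(percept)   # INTERPRET-INPUT
    action = rules[state]              # RULE-MATCH + RULE-ACTION
    return action

print(simple_reflex_agent(("A", "dirty")))  # suck
print(simple_reflex_agent(("A", "clean")))  # right
```

Note that the agent consults only the current percept: nothing here remembers earlier calls, which is exactly why it fails when the environment is not fully observable.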



Agent Program- Simple Reflex Agent
Simple reflex agents have the admirable property of being simple, but they turn out
to be of very limited intelligence.

The agent will work only if the correct decision can be made on the basis of the
current percept alone - that is, only if the environment is fully observable.

Even a little bit of unobservability can cause serious trouble.

Problems with the simple reflex agent design approach:

•They have very limited intelligence.
•They have no knowledge of non-perceptual parts of the current state.
•The rule tables are mostly too big to generate and store.
•They are not adaptive to changes in the environment.



Agent Program
2. Model-based Reflex Agent
The model-based agent can work in a partially observable
environment and track the situation.
A model-based agent has two important factors:
Model: knowledge about "how things happen in the world"; this is why
it is called a model-based agent.
Internal State: a representation of the current state based on
percept history.

These agents have the model, "which is knowledge of the world," and
based on the model they perform actions.
Agent Program-Model-based Reflex Agent
•The agent should maintain some sort of internal state that
depends on the percept history & thereby reflects some of
the unobserved aspects of the current state.
•Updating this internal state information as time goes by requires
two kinds of knowledge to be encoded in the agent program.
•FIRST, we need some information about how the world evolves
independently of the agent - e.g., that an overtaking car will generally
be closer behind than it was a moment ago.
•SECOND, we need some information about how the agent's own
actions affect the world. This knowledge about how the world
works - whether implemented in simple Boolean circuits or in
complete scientific theories - is called a model of the world. An
agent that uses such a model is called a model-based agent.
Agent Program - Model-Based Reflex Agent
[Figure: model-based reflex agent architecture diagram]


Agent Program- Model-based Reflex Agent
Algorithm: REFLEX-AGENT-WITH-STATE (percept) returns an action
Static: state, a description of the current world state; rules, a set of
condition-action rules; action, the most recent action, initially none
    state <- UPDATE-STATE (state, action, percept)
    rule <- RULE-MATCH (state, rules)
    action <- RULE-ACTION [rule]
    Return action
The UPDATE-STATE function is responsible for creating the new internal state
description. As well as interpreting the new percept in the light of existing
knowledge about the state, it uses information about how the world
evolves to keep track of the unseen parts of the world, & it also must
know what the agent's actions do to the state of the world.



Agent Program
3. Goal-based Reflex Agent
•Knowledge of the current state of the environment is not always sufficient
for an agent to decide what to do.
•The agent needs to know its goal, which describes desirable situations.
•E.g., at a road junction, the taxi can turn left, turn right or go straight.
The correct decision depends on where the taxi is trying to go.
•Goal-based agents expand the capabilities of the model-based agent by
having the "goal" information.
•They choose an action so that they can achieve the goal.



Agent Program- Goal-based Reflex Agent
•These agents may have to consider a long sequence of possible
actions before deciding whether the goal is achieved. Such
consideration of different scenarios is called searching and
planning, which makes an agent proactive.
•The agent program can combine the goal information with information
about the results of possible actions in order to choose actions that
achieve the goal.
•Sometimes goal-based action selection is straightforward,
when goal satisfaction results immediately from a single action.
•Sometimes it will be more tricky, when the agent has to
consider long sequences of twists & turns to find a way to
achieve the goal.
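Goal-based action selection can be sketched as a search over action sequences. The road map below, echoing the junction example, is entirely hypothetical; the agent plans with breadth-first search and returns the first step of a sequence that reaches the goal.

```python
# A sketch of goal-based action selection via breadth-first search.
from collections import deque

roads = {  # junction -> reachable junctions (hypothetical map)
    "start": ["left", "straight", "right"],
    "left": ["dead-end"],
    "straight": ["market"],
    "right": ["airport"],
    "dead-end": [], "market": [], "airport": [],
}

def plan(start, goal):
    """BFS: return the sequence of junctions from start to goal."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads[path[-1]]:
            frontier.append(path + [nxt])
    return None

def goal_based_agent(location, goal):
    path = plan(location, goal)
    return path[1] if path and len(path) > 1 else None  # next action to take

print(goal_based_agent("start", "airport"))  # right
```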
Agent Program - Goal-Based Reflex Agent
[Figure: goal-based agent architecture diagram]


Agent Program- Goal-based Reflex Agent
•A goal-based agent could, in principle, reason that if the car in
front has its brake lights on, it will slow down.
•Given the way the world usually evolves, the only action that
will achieve the goal of not hitting other cars is to brake.
•Although the goal-based agent appears less efficient, it is
more flexible because the knowledge that supports its
decisions is represented explicitly & can be modified.
•If it starts to rain, the agent can update its knowledge of how
effectively its brakes will operate; this will automatically
cause all of the relevant behaviour to be altered to suit the
new conditions.



Agent Program
4. Utility-based Reflex Agent
•These agents are similar to the goal-based agent.
•Utility-based agents act based not only on goals but also on the best way to achieve
the goal.
•The utility-based agent is useful when there are multiple possible alternatives and the
agent has to choose the best action to perform.
•The utility function maps each state to a real number to check how efficiently each action
achieves the goals.



Agent Program- Utility-based Reflex Agent
•Goals alone are not really enough to generate high-quality
behaviour in most environments.
•E.g., there are many action sequences that will get the taxi to
its destination, but some are quicker, safer, more reliable or
cheaper than others.
•Goals just provide a binary distinction between "happy" and
"unhappy" states. Because "happy" doesn't sound very
scientific, the customary terminology is to say that if one
world state is preferred to another, then it has higher utility
for the agent.



Agent Program - Utility-Based Reflex Agent
[Figure: utility-based agent architecture diagram]


Agent Program- Utility-based Reflex Agent
•A utility function maps a state onto a real number which
describes the associated degree of happiness. A complete
specification of the utility function allows rational decisions
in two kinds of cases where goals are inadequate.
•FIRST, when there are conflicting goals, only some of which
can be achieved, the utility function specifies the
appropriate tradeoff.
•SECOND, when there are several goals that the agent can aim
for, none of which can be achieved with certainty, utility
provides a way in which the likelihood of success can be
weighed against the importance of the goals.
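The second case, weighing success probability against the value of a goal, amounts to choosing the action with the highest expected utility. The sketch below illustrates this with two invented route options and made-up probabilities and utilities.

```python
# A sketch of utility-based action selection via expected utility.
# action -> list of (probability, utility of the resulting state)
outcomes = {
    "highway":   [(0.9, 8.0), (0.1, -10.0)],  # faster, small risk of delay
    "backroads": [(1.0, 5.0)],                # slower but certain
}

def expected_utility(action):
    """Probability-weighted average utility over possible outcomes."""
    return sum(p * u for p, u in outcomes[action])

def utility_based_agent():
    """Pick the action whose expected utility is highest."""
    return max(outcomes, key=expected_utility)

eu = expected_utility("highway")   # 0.9*8 - 0.1*10 = 6.2 > 5.0
print(utility_based_agent())       # highway
```

A goal-based agent could only say that both routes reach the destination; the utility function is what lets the agent prefer one over the other.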



Applications of Learning Agents
•Classification
•Prediction
•Search engines
•Computer vision
•Self-driving cars
•Recognition of gestures



Computer Vision
•Vision is the act or power of sensing with the eyes: sight.
•Computer vision is a field that includes methods for acquiring,
processing, analysing & understanding images and,
in general, high-dimensional data from the real world, in order
to produce numerical or symbolic information, e.g. in the form of
decisions.
•A theme in the development of this field has been to duplicate
the abilities of human vision by electronically perceiving &
understanding an image.
•This image understanding can be seen as the disentangling of
symbolic information from image data using models constructed
with the aid of geometry, physics, statistics, & learning theory.
Computer Vision
•Applications range from tasks such as industrial machine vision systems which,
say, inspect bottles speeding by on a production line, to research into AI &
computers or robots that can comprehend the world around them.
•Computer vision covers the core technology of automated image analysis,
which is used in many fields.
•Examples of applications of computer vision systems include:
•Controlling processes - e.g. an industrial robot
•Navigation - by an autonomous vehicle or mobile robot
•Detecting events - for visual surveillance or people counting
•Organising information - for indexing databases of images & image sequences
•Interaction - as the input to a device for computer-human interaction
•Automatic inspection - in manufacturing applications
Computer Vision Methods
•The organisation of a computer vision system is highly
application-dependent.
•The specific implementation of a computer vision system also depends on
whether its functionality is pre-specified or whether some part of it can be learned
or modified during operation.
•Some typical functions found in computer vision systems are:
•Image acquisition - a digital image is produced by one or several image
sensors which, besides various types of light-sensitive cameras, include
range sensors, tomography devices, radar, ultrasonic cameras, etc.
•Pre-processing - before a computer vision method can be applied to
image data in order to extract some specific piece of information, it is
usually necessary to process the data to ensure that it satisfies
certain assumptions implied by the method.
Computer Vision Methods
•Feature extraction - image features at various levels of
complexity are extracted from the image data, e.g. lines, edges &
ridges, and localised interest points such as corners, blobs or points.
•Detection/Segmentation - at some point in the processing, a
decision is made about which image points or regions of the
image are relevant for further processing.
•High-level processing - e.g. image recognition (classifying
a detected object into different categories) & image registration
(comparing & combining two different views of the same object).
•Decision making - making the final decision required for the
application, for example pass/fail in automatic inspection
applications, match/no match in recognition applications, etc.
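The stages above can be sketched as composable functions. Each stub below stands in for real image-processing code: the "image" is just a grid of brightness values, and the threshold and pass/fail rule are invented parameters for illustration.

```python
# A toy pipeline: pre-processing -> detection/segmentation -> decision.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 0, 0],
]

def pre_process(img):
    """Normalise brightness to the 0..1 range."""
    peak = max(max(row) for row in img)
    return [[px / peak for px in row] for row in img]

def segment(img, threshold=0.5):
    """Detection/segmentation: keep the points relevant for further processing."""
    return [(r, c) for r, row in enumerate(img)
            for c, px in enumerate(row) if px > threshold]

def decide(points):
    """Decision making: e.g. pass/fail in automatic inspection."""
    return "fail" if points else "pass"   # a bright blob counts as a defect

points = segment(pre_process(image))
print(points)          # the four bright pixels
print(decide(points))  # fail
```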
Computer Vision Applications
•Medical computer vision or medical image processing: this
area is characterised by the extraction of information from
image data for the purpose of making a medical diagnosis of
a patient. The image data is in the form of microscopy
images, X-ray images, angiography images, ultrasonic images
& tomography images.
•Machine vision: information is extracted for the
purpose of supporting a manufacturing process, e.g. quality
control, where details or final products are
automatically inspected in order to find defects.



Computer Vision Applications
•Military applications: the detection of enemy soldiers or
vehicles & missile guidance. Modern military concepts such as
"battlefield awareness" imply that various sensors, including
image sensors, provide a rich set of information about a
combat scene which can be used to support strategic
decisions.
•Autonomous vehicles: submersibles, land-based vehicles
(small robots with wheels, cars, or trucks), aerial vehicles &
unmanned aerial vehicles.



Natural Language Processing
•Natural language is human language.
•NLP uses AI to allow a user to communicate with a computer in
the user's natural language.
•The computer can both understand & respond to commands
given in a natural language.
•Computer languages are artificial languages, invented for the sake
of communicating instructions to computers & enabling them to
communicate with each other.
•Most computer languages consist of a combination of symbols,
numbers, & some words.
•By programming computers to respond to our natural language, we
make them easier to use.
Natural Language Processing
•There are many problems in trying to make a computer understand
people.
•Four problems arise that can cause misunderstanding:
1.Ambiguity - confusion over what is meant, due to multiple meanings
of words & phrases.
2.Imprecision - thoughts are sometimes expressed in vague & inexact
terms.
3.Incompleteness - the entire idea is not presented & the listener is
expected to "read between the lines".
4.Inaccuracy - spelling, punctuation, & grammar problems can obscure
meaning. It is even more difficult for computers, which have no share
at all in the real-world relationships that confer meaning upon
information, to correctly interpret natural language.
Natural Language Processing
•To alleviate these problems, NLP programs seek to analyse
syntax (the way words are put together in a sentence or phrase),
semantics (the derived meaning of the phrase or sentence) &
context (the meaning of the distinct words within the sentence).
•The computer must also have access to a dictionary which
contains definitions of every word & phrase it is likely to
encounter, & may also use keyword analysis - a pattern-matching
technique in which the program scans the text, looking for
words that it has been programmed to recognise.
•Ex.: the computerised card catalogue available in many public libraries.
The main menu usually offers four choices for looking up
information: search by author, by title, by subject or by
keyword.
Natural Language Processing- Steps
1.Morphological analysis - individual words are analysed into their
components, & non-word tokens such as punctuation are separated from
the words.
2.Syntactic analysis - linear sequences of words are transformed into
structures that show how the words relate to each other. Some word
sequences may be rejected if they violate the language's rules for how
words may be combined.
3.Semantic analysis - the structures created by the syntactic analysis are
assigned meanings.
4.Discourse integration - the meaning of an individual sentence may
depend on the sentences that precede it & may influence the
meanings of the sentences that follow it.
5.Pragmatic analysis - the structure representing what was said is
reinterpreted to determine what was actually meant.
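The first two steps can be sketched on a toy sentence. The "morphological analysis" below merely splits punctuation into separate tokens, and the "syntactic analysis" applies one invented toy rule; real NLP systems use far richer models.

```python
# A toy sketch of the first two NLP steps.
import re

def morphological_analysis(text):
    """Separate words from non-word tokens such as punctuation."""
    return re.findall(r"\w+|[^\w\s]", text)

def syntactic_analysis(tokens):
    """Reject sequences violating a (toy) grammar rule: no doubled words."""
    words = [t for t in tokens if t.isalpha()]
    for a, b in zip(words, words[1:]):
        if a == b:
            raise ValueError(f"rejected: repeated word {a!r}")
    return words

tokens = morphological_analysis("The cat sat, didn't it?")
print(tokens)                     # punctuation split off as separate tokens
print(syntactic_analysis(tokens))
```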
Natural Language Processing- Goals
•The goal of NLP, as stated above, is "to accomplish human-like language
processing".
•The choice of the word "processing" is very deliberate & should not be
replaced with "understanding".
•Although the field of NLP was originally referred to as natural
language understanding (NLU) in the early days of AI, it is well agreed
today that while the goal of NLP is true NLU, that goal has not yet been
accomplished.
•A full system would be able to:
1.Paraphrase an input text
2.Translate the text into another language
3.Answer questions about the contents of the text
4.Draw inferences from the text
